Google has announced a new AI tool to help organisations tackle child sexual abuse material (CSAM) online. The tool uses deep neural networks for image processing and assists human reviewers by sorting through images and prioritising those most likely to contain CSAM, so that illegal content can be identified and removed more quickly.
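Google has not published implementation details, but the general pattern it describes, scoring images with a deep neural network and pushing the highest-risk ones to the front of a human review queue, is straightforward to sketch. The following is a minimal illustration of that triage idea, assuming a classifier that returns a probability-like score; `score_image`, `triage`, and `ReviewItem` are invented names for illustration, not Google's API.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class ReviewItem:
    image_id: str
    score: float  # model-estimated likelihood the image contains CSAM


def score_image(image_bytes: bytes) -> float:
    # Placeholder: a production system would run a trained deep neural
    # network here. This stub derives a deterministic pseudo-score from
    # the bytes purely so the example runs end to end.
    return hashlib.sha256(image_bytes).digest()[0] / 255.0


def triage(images: dict[str, bytes]) -> list[ReviewItem]:
    # Rank images so human reviewers see the likeliest matches first;
    # the legal judgement and removal decision stay with a human.
    items = [ReviewItem(image_id=name, score=score_image(data))
             for name, data in images.items()]
    return sorted(items, key=lambda item: item.score, reverse=True)


# Hypothetical usage: reviewers work from the top of the returned queue.
queue = triage({"img-001": b"\x01\x02", "img-002": b"\x03\x04"})
for item in queue:
    print(f"{item.image_id}: {item.score:.2f}")
```

The design point worth noting is that the model only orders the queue: it surfaces the most likely matches rather than making removal decisions itself.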
The web giant said it will make the tool available for free to NGOs and industry partners via the Content Safety API. Discussing the API, Susie Hargreaves OBE, CEO of the Internet Watch Foundation, said:
“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”
According to Google, the new tool considerably speeds up the process of identifying CSAM online. The firm states that it can help a reviewer find and take action on 700% more CSAM content over the same period of time; in other words, in the time it previously took to review 100 pieces of content, a reviewer could now get through 800.
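As a quick back-of-the-envelope check on that figure (illustrative arithmetic only, not a Google benchmark):

```python
# "700% more" means the new rate equals the old rate plus 700% of it,
# i.e. eight times the original throughput.
baseline = 100                       # pieces reviewed per period (illustrative)
with_tool = baseline + 7 * baseline  # 700% more than the baseline
assert with_tool == 8 * baseline     # 800 pieces in the same period
print(with_tool)                     # -> 800
```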
If you’re interested in using the Content Safety API within your organisation, you can reach out to Google via this form.