Google's new AI tool helps locate child sexual abuse material

Helps reviewers scan 700% more material
Google has announced that it has developed a new artificial intelligence tool to help combat the online spread of child sexual abuse material (CSAM).

The free AI tool uses deep neural networks for image processing to help companies and organizations that monitor for CSAM review 700 percent more material than they currently can.

In other words, a reviewer who could previously check 100 pieces of content in a given period can now check 800 in the same amount of time.

Existing tools like Microsoft's PhotoDNA can help flag material on online platforms, but only material that has already been identified and marked as abusive; they cannot catch content that has never been seen before.

Google's new tool will still require human review for confirmation, but it presents reviewers with the material most likely to be abusive first, rather than requiring them to sort through every item in arbitrary order.
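Google has not published the API's internals, so the names and scores below are purely illustrative. Conceptually, though, the triage step works like a classifier-scored priority queue: each flagged item carries a model-assigned likelihood, and the review queue is sorted so the highest-risk items surface first. A minimal sketch, assuming a hypothetical `abuse_likelihood` score:

```python
from dataclasses import dataclass


@dataclass
class FlaggedItem:
    """A piece of flagged content awaiting human review (hypothetical structure)."""
    item_id: str
    abuse_likelihood: float  # hypothetical classifier score in [0, 1]


def prioritize_for_review(items: list[FlaggedItem]) -> list[FlaggedItem]:
    """Order the review queue so the highest-likelihood items surface first."""
    return sorted(items, key=lambda item: item.abuse_likelihood, reverse=True)


queue = prioritize_for_review([
    FlaggedItem("a", 0.12),
    FlaggedItem("b", 0.97),
    FlaggedItem("c", 0.55),
])
print([item.item_id for item in queue])  # highest-likelihood item first
```

The reviewer then works down the queue from the top, which is why the same amount of human effort covers far more material: effort is concentrated where the classifier is most confident.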

If you're interested in using the Content Safety API service within your organization, you can reach out to Google via this form.