Google has released a new AI tool that helps companies and organizations identify child sexual abuse imagery online. The tool was released on Monday and is gaining recognition within the internet policing community.
Engineering lead Nikola Todorovic issued the following statement:
“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse. We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”
AI is spreading quickly, and tools like this one are gaining widespread recognition. The software is free to NGOs and industry partners and is designed to detect abusive imagery that has not previously been flagged as illegal.
Charities such as the UK-based Internet Watch Foundation can use the new tool, which aims to make the internet a safer place.
Susie Hargreaves, CEO of the Internet Watch Foundation, issued the following statement:
“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”
Google is among the industry's most experienced developers of AI technology, and it continues to work on new, useful tools.