What is CSAM?
What is Google’s approach to combating CSAM?
How does Google identify CSAM on its platform?
We invest heavily in fighting child sexual exploitation online and use technology to deter, detect, and remove CSAM from our platforms. This includes automated detection and human review, in addition to reports submitted by our users and third parties such as NGOs. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Detection of never-before-seen CSAM helps the child safety ecosystem in a number of ways, including identifying child victims in need of safeguarding and adding to the hash set, which expands our ability to detect known CSAM. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritize abuse content for human review.
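Hash matching of this kind can be sketched generically: a service computes a digest of incoming content and checks it against a set of digests of previously confirmed material. The sketch below is a simplified illustration only; the hash set and helper names are hypothetical, and production systems such as CSAI Match rely on perceptual and video hashing that survives re-encoding, not the exact cryptographic hashing shown here.

```python
import hashlib

# Hypothetical hash set of previously confirmed content (hex digests).
# Here it is seeded with the SHA-256 digest of the byte string b"test"
# purely so the example is runnable.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte payload."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Flag content whose digest appears in the known-hash set."""
    return sha256_of(data) in KNOWN_HASHES

print(is_known(b"test"))         # byte-identical to seeded content -> True
print(is_known(b"new content"))  # unseen content -> False
```

Note the key limitation this illustrates: a cryptographic hash only matches byte-identical files, which is why real known-content detection uses perceptual hashing, and why classifiers are still needed for never-before-seen material.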
Both CSAI Match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms—please see here for more details.
What does Google do when it detects CSAM on its platform?
When we detect CSAM on our platforms, we remove it and report it to the National Center for Missing & Exploited Children (NCMEC). NCMEC serves as a clearinghouse and comprehensive reporting center in the United States for issues related to child exploitation. Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world.
What is a CyberTipline report and what type of information does it include?
Once Google becomes aware of apparent CSAM, we make a report to NCMEC. These reports are commonly referred to as CyberTipline reports, or CyberTips. In addition, we attempt to identify cases involving hands-on abuse of a minor, production of CSAM, or child trafficking. In those instances, we send a supplemental CyberTip report to NCMEC to help prioritize the matter. A report sent to NCMEC may include information identifying the user and the minor victim, as well as the violative content and/or other helpful contextual data.
Below are some examples of the real-world impact of CyberTip reports Google has submitted. They provide a glimpse of the wide range of reports we make, but they are not comprehensive.
- A Google CyberTip reported numerous pieces of CSAM involving elementary school children, taken in a classroom setting. Some of the reported CSAM was previously unidentified by Google and appeared to have been produced by the Google Account holder. NCMEC forwarded the report to law enforcement, which led to the identification and safeguarding of two minor children depicted in the reported CSAM imagery.
- A Google CyberTip reported the solicitation and production of CSAM by an account holder, who paid for numerous videos to be made depicting the hands-on abuse of dozens of minor boys. NCMEC forwarded the report to law enforcement. The account holder was convicted for production of CSAM, and several dozen children were identified and safeguarded from ongoing abuse.
- A Google CyberTip reported a single piece of known CSAM content that led to the apprehension of the account holder, who, according to law enforcement, was found to be in possession of much more CSAM, was directly involved in the hands-on abuse of minors in their care, and had provided those minors to others to abuse as well. Through the efforts of Google, NCMEC, and law enforcement, three children were rescued from sexual abuse.
- A Google CyberTip reported CSAM that was produced by the Google Account holder and solicited from minors the account holder had online access to. The account holder was later apprehended and determined by law enforcement to hold a position of trust as a medical professional: they used this position to abuse patients in their care, and they had direct access to minors online from whom they solicited the production of CSAM.
How does Google combat risks of CSAM in the Generative AI (GenAI) space?
AI-generated CSAM, or computer-generated imagery depicting child sexual abuse, is a threat Google takes very seriously. Our work to detect, remove, and report CSAM has always included violative content involving actual minors, modified imagery of an identifiable minor engaging in sexually explicit conduct, and computer-generated imagery that is indistinguishable from an actual minor engaging in such conduct.
Google places a heavy emphasis on child safety when creating our own GenAI models and products. We follow Google’s Responsible Generative AI principles in protecting all of Google’s publicly available models and the services built on top of these models.
We deploy a variety of child safety protections for our GenAI models and products. These can include protections against the presence of CSAM in the training data underlying our models, against CSAM-seeking and CSAM-producing prompts, and against violative outputs. We also conduct robust child safety testing on our models prior to public launch to understand and mitigate the possibility of CSAM being generated.
We also work with others in the child safety ecosystem, including the Technology Coalition and child safety NGOs, to share and understand best practices as this technology continues to evolve.
What does Google do to deter users from seeking out CSAM on Search?
How does Google contribute to the child safety ecosystem to combat CSAM?
Google’s child safety team builds technology that accurately detects, reports, and removes CSAM to protect our users and prevent children from being harmed on Google products. We developed the Child Safety toolkit to ensure the broader ecosystem also has access to this powerful technology, and to help prevent the online proliferation of child sexual abuse material. Additionally, we provide Google’s Hash Matching API to NCMEC to help them prioritize and review CyberTipline reports more efficiently, allowing them to home in on reports involving children who need immediate help.
We also share child sexual abuse and exploitation signals to enable CSAM removal across the wider ecosystem. We share millions of CSAM hashes with NCMEC’s industry hash database so that other providers can access and use these hashes as well. We also signed on to Project Lantern, a program that enables technology companies to share relevant signals to combat online sexual abuse and exploitation in a secure and responsible way, understanding that this abuse can cross platforms and services.
We are also an active member of several coalitions, such as the Technology Coalition, the WeProtect Global Alliance, and INHOPE, that bring companies and NGOs together to develop solutions that disrupt the exchange of CSAM online and prevent the sexual exploitation of children. Google prioritizes participation in these coalitions, and in our work with NGOs like NCMEC and Thorn, we share our expertise, explore best practices, and learn more about the latest threats on key child safety issues.