This article summarizes Google’s approach to combating Child Sexual Abuse and Exploitation (CSAE) on our services, which is also detailed on our Child Safety page, in our global CSAM Transparency Report and FAQs, and in Google's Privacy Policy.
As set out in further detail in our Privacy Policy, we process user information to combat CSAE on our services, including to protect children from child sexual abuse, keep our platforms free from abuse, and protect our users from this type of egregious and often illegal content.
Google does not permit the use of its services to create, share, upload, or send CSAE material described below. We use a combination of specific technologies and human confirmation processes to identify and remove this type of material, and to inform appropriate and proportionate enforcement of our policies. We do this with safeguards in place to protect your privacy.
We act on the following kinds of content:
- Child sexual abuse and exploitation (CSAE) content - including child sexual abuse material (CSAM), obscene visual representations of children (for example, CSAM cartoons), the advertisement or solicitation of CSAM, and content that provides instructions on how to carry out child sexual abuse
- Child grooming - for example, befriending a child online to facilitate, either online or offline, sexual contact or exchanging sexual imagery with that child
- Sextortion - for example, the use of real or alleged access to a child’s intimate images to threaten or blackmail a child
- Sexualization of a minor - for example, imagery that depicts, encourages, or promotes the sexual abuse of children or the portrayal of children in a manner that could result in the sexual exploitation of children
- Trafficking of a child - for example, advertising or solicitation of a child for commercial sexual exploitation
How Google detects and reports online CSAE
We use different techniques to analyze the content of your Google Account for CSAM. We identify CSAM with trained specialist teams and automated technology, including artificial intelligence, blocking and filtering technology, and hash-matching technology, which creates a “hash,” or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. Hash matching enables us to detect and remove previously seen CSAM at scale, while artificial intelligence helps us identify new, previously undetected CSAM.
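To illustrate the hash-matching concept only, here is a minimal sketch. It uses a plain SHA-256 digest as the fingerprint and an in-memory set as the known-hash database; both are simplifications, since production systems rely on vetted hash lists and perceptual hashing that tolerates re-encoding and resizing. The names below are hypothetical and do not describe Google's internal systems.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a vetted database of hashes of previously
# confirmed CSAM (in practice, sourced from curated industry hash lists).
KNOWN_HASHES: set[str] = set()

def fingerprint(path: Path) -> str:
    """Compute a digital fingerprint of a file (here, a SHA-256 digest).

    Real systems typically use perceptual hashes so that resized or
    re-encoded copies of the same image still match; SHA-256 only
    matches byte-identical files.
    """
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_material(path: Path) -> bool:
    """Return True if the file's fingerprint appears in the known-hash set."""
    return fingerprint(path) in KNOWN_HASHES
```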
We primarily use automated technologies to proactively detect violations of our policies, combined with other techniques, including human review and oversight, to ensure the quality and accuracy of these technologies. For example, we have independent verification processes in place to ensure the quality of our CSAM hash database, humans review all newly identified CSAM before we take action on it, we conduct internal quality control over our CSAM processing, and we continuously update our approach to account for context and nuance in this work.
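As a rough illustration of how proactive detection and human review can fit together, the sketch below routes content flagged only by a classifier (that is, potentially new material) to human review, while content matching a previously verified hash can be actioned directly. The signal names and routing rules are assumptions for illustration, not a description of Google's internal workflow.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    NO_ACTION = auto()
    HUMAN_REVIEW = auto()  # newly identified material is confirmed by people first
    ENFORCE = auto()       # previously verified matches can be actioned at scale

@dataclass
class DetectionSignals:
    hash_match: bool       # matched a vetted hash list of known material
    classifier_flag: bool  # flagged by an ML classifier as potentially new material

def route(signals: DetectionSignals) -> Outcome:
    """Hypothetical routing rule reflecting the workflow described above."""
    if signals.hash_match:
        return Outcome.ENFORCE
    if signals.classifier_flag:
        return Outcome.HUMAN_REVIEW
    return Outcome.NO_ACTION
```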
When we detect CSAM on our platforms, we remove it and make a “CyberTipline” report to the National Center for Missing & Exploited Children (NCMEC) or other relevant authorities around the world. When we make a report to NCMEC, we may include relevant information such as the violative content, contextual data, and/or personal information identifying the user or the minor victim. NCMEC, in turn, forwards reports to law enforcement authorities around the world, as appropriate.
We may also further review a Google Account if CSAM is identified. This review may involve examining surrounding content, account metadata signals, and other relevant factors to determine appropriate and proportionate enforcement action in relation to the material identified in the Google Account, and to identify and report additional CSAM. Depending on the nature and severity of the violation, we may provide an in-product educational warning, restrict access to certain Google products or services, or disable the Google Account. To prevent future violations of our policies by the same Google Account or related accounts, we may apply safeguards against abuse, such as additional CSAM detection, as appropriate. Our goal is to ensure that any action taken is proportionate, effective, and adheres to legal requirements.
We also respond to reactive reports of CSAE on our platforms. Our response may include reviewing the contents of a Google Account to corroborate and confirm the reported abuse so that we can take appropriate action swiftly.
When we disable an account, we notify the user of Google’s decision and provide information on available redress options, including appeals, to regain access to the account.
When you submit an appeal, Google reviews your Google Account’s data to see what happened. By submitting an appeal, you agree to let Google conduct this review to facilitate your appeal. All CSAE-related user appeals will be reviewed up to two times. If the first appeal isn’t approved, you can submit a second appeal with more information, which a Google reviewer will evaluate. Any appeals after that will be closed.
If a user's appeal of their account disablement is successful, we will reinstate their account. See further information on how to appeal Google's decision to disable your account.
How Google contributes to the child safety ecosystem to combat CSAE
To help prevent the online proliferation of child sexual abuse material in the broader ecosystem, we developed the Child Safety toolkit to share this powerful technology with others in industry and NGOs. Additionally, we provide Google’s Hash Matching API to NCMEC to help them prioritize and review CyberTipline reports more efficiently, allowing them to home in on reports involving children who need immediate help.
We also work with partners to share CSAM content signals to enable removal from the wider ecosystem. We share millions of CSAM hashes with NCMEC’s industry hash database so that other providers can access and use these hashes as well. We have also signed on to Project Lantern, a program that enables technology companies to share relevant abuse signals, which may include personal data, to combat online sexual abuse and exploitation.
How long your data is retained
We retain any CSAM imagery and related data that we report to relevant authorities for a limited period, which is the longer of:
- The period required by applicable laws and to ensure we are able to respond to valid legal process
- The period necessary for the purpose of improving our CSAM detection technologies to combat the proliferation of online CSAE
CSAM retained for technology improvement purposes is de-identified and no longer tied to a Google Account identifier. Our criteria for setting and reviewing retention periods reflect what is necessary to ensure that our CSAM detection technologies are robust, accurate, and effective, and to combat CSAE effectively on our platforms and services.
Our Privacy Policy provides more information about Google’s data retention practices and describes why we hold onto different types of data for different periods of time.
Detection of CSAM in accordance with Regulation (EU) 2021/1232
Within the European Union, some of our communication services detect CSAM under Regulation (EU) 2021/1232, which provides for a derogation from the confidentiality of communications under Articles 5(1) and 6(1) of Directive 2002/58/EC for the purpose of combating online child sexual abuse.
Content processed for the purpose of detecting, removing, and reporting CSAM from our electronic communication services within the scope of the Regulation will be stored for a maximum of 12 months from when it is identified and reported, unless Google receives valid legal process requiring a longer storage period.
Under Regulation (EU) 2021/1232, users within the European Union can, in addition to submitting an appeal to restore their account, lodge a complaint with their country’s relevant data protection authority, and they have the right to seek a judicial remedy before a competent court.
For more information, please see our Transparency Report under Regulation (EU) 2021/1232.