Content is removed or restricted on YouTube when it's found to violate one of our policies, such as our Community Guidelines, or applicable law. To determine whether content is violative, we use a combination of automated systems and human reviews.
Automated systems
Our automated systems use machine learning, drawing on data from previous human reviews to identify potentially violative content.
Most of these systems are continuously supplied with millions of data points from human reviews, allowing them to detect violations with a high level of accuracy. Automation also provides efficient response times for the high volume of content that YouTube receives.
When our systems have a high degree of confidence that content is violative, they may make an automated decision. In the majority of cases, however, they simply flag the content to a trained human reviewer for evaluation before any action is taken.
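This routing can be thought of as confidence-threshold triage. The sketch below is purely illustrative: the classifier score, the threshold values, and the function names are assumptions made for the example, not a description of YouTube's actual implementation.

```python
from enum import Enum, auto

class Route(Enum):
    AUTOMATED_ACTION = auto()  # high-confidence violation: act automatically
    HUMAN_REVIEW = auto()      # uncertain: queue for a trained reviewer
    NO_ACTION = auto()         # confidently non-violative: leave content up

# Hypothetical thresholds; a real system would tune these per policy area.
AUTO_ACTION_THRESHOLD = 0.98
REVIEW_THRESHOLD = 0.50

def triage(violation_score: float) -> Route:
    """Route content by a classifier's confidence that it is violative
    (0.0 = certainly fine, 1.0 = certainly violative)."""
    if violation_score >= AUTO_ACTION_THRESHOLD:
        return Route.AUTOMATED_ACTION
    if violation_score >= REVIEW_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.NO_ACTION

if __name__ == "__main__":
    for score in (0.99, 0.72, 0.10):
        print(f"{score:.2f} -> {triage(score).name}")
```

The key design point is the asymmetry: automated action is reserved for the highest-confidence cases, while anything uncertain defaults to human review.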
Human reviews
In a human review, a trained reviewer evaluates potentially violative content and makes a decision based on the relevant policy or law.
If content is found to be violative, our human reviewers may remove it, or age-restrict it if it isn't appropriate for all audiences. If the content has an educational, documentary, scientific, or artistic (EDSA) purpose, we may allow it to remain on YouTube.
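As a rough illustration of how these outcomes relate, here is a hedged sketch of that decision logic. The inputs and names are hypothetical; actual reviews weigh policy-specific context that a three-flag function cannot capture.

```python
from enum import Enum, auto

class Decision(Enum):
    REMOVE = auto()
    AGE_RESTRICT = auto()
    KEEP = auto()

def review_decision(violates_policy: bool,
                    has_edsa_purpose: bool,
                    appropriate_for_all_audiences: bool) -> Decision:
    """Hypothetical mapping from a reviewer's findings to an outcome."""
    if violates_policy and not has_edsa_purpose:
        return Decision.REMOVE
    if not appropriate_for_all_audiences:
        # Violative-but-EDSA or borderline content may stay up
        # behind an age restriction rather than being removed.
        return Decision.AGE_RESTRICT
    return Decision.KEEP

if __name__ == "__main__":
    print(review_decision(True, False, False).name)  # REMOVE
    print(review_decision(True, True, False).name)   # AGE_RESTRICT
    print(review_decision(False, False, True).name)  # KEEP
```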
If a content decision is appealed, a human reviewer evaluates the appeal on a case-by-case basis.
More info
Community Guidelines
How YouTube identifies Community Guidelines violations
How YouTube evaluates Educational, Documentary, Scientific, and Artistic (EDSA) content
Copyright
How YouTube reviews copyright removal requests
YouTube Partner Program (YPP)
How YouTube enforces YouTube monetization policies
How YouTube reviews YPP applications
Privacy
How YouTube determines if content should be removed for a privacy violation