YouTube brand safety: Description of methodology

Accredited by Media Rating Council

The Media Rating Council accreditation certifies that YouTube in-stream video ads, suitability controls (inclusive of Inventory Modes), and the Advertiser Safety Error Rate metric adhere to industry standards for content-level brand safety processes and suitability controls. This applies to YouTube in-stream video inventory purchased through Google Ads, Display & Video 360, and YouTube Reserve services.

The industry guidelines were developed in an effort coordinated by the Interactive Advertising Bureau (IAB), the Media Rating Council (MRC), the American Association of Advertising Agencies (4A’s), Global Alliance for Responsible Media (GARM), and the Association of National Advertisers (ANA). YouTube was audited against these guidelines by an independent third-party auditor engaged by the MRC.


About Google’s YouTube brand safety accreditation

The accreditation is focused on YouTube’s brand safety processes and suitability controls applied to YouTube’s in-stream video ads. This relates to:

  • The measurement and reporting of digital in-stream video ad impressions and the related viewability metrics across desktop, mobile web, and mobile application environments, net of general invalid traffic (GIVT), sophisticated invalid traffic (SIVT), and brand-unsafe content, across Google Ads, Display & Video 360, and YouTube Reserve services
  • Ad placements using YouTube’s Inventory Mode suitability controls (Expanded, Standard, and Limited modes)
  • The reporting of advertiser safety error rate at the YouTube platform level

The following ad types and device types are in-scope for this accreditation:

  • Desktop, mobile app, and mobile web
  • Skippable in-stream ads
  • Non-skippable in-stream video ads
  • Bumper ads

Exclusions from the accreditation

The following are excluded from the accreditation:

  • All non-YouTube and Google video partners inventory
  • Inventory accessed from YouTube Kids, YouTube Music, and over-the-top devices (for example, connected TVs)
  • Non-in-stream ads (for example, Masthead and Shorts ad formats)
  • Live stream inventory
  • Sites and apps where ads on embedded YouTube videos appear
  • Performance optimization targeting tools and suitability controls (for example: topic classifiers, geolocation, keywords, or audiences)
  • Specific YouTube channel targeting (or exclusion)
  • Third-party and Partner Sold campaigns

Brand safety methodology

Google’s YouTube brand safety video classification methodology encompasses policies that determine what content is permitted on YouTube and eligible to monetize through ads, technology to analyze the tremendous amount of video on the platform, and a team of human raters to augment classifications made by Google AI.

The initial safety layer is the YouTube Community Guidelines, which are out of scope of this accreditation audit but provide common-sense rules about what is allowed on YouTube.

When a video is uploaded to YouTube, the video’s attributes (such as visual data, audio data, comments, and other metadata) are analyzed by Google's AI models to classify the video against our advertising policies (for example, Advertiser-Friendly Guidelines) and topic taxonomy. We also collect creator-provided ratings as another input into our monetization decisioning. Only content meeting our monetization policies will be eligible to show ads.

In addition to classifications made by Google AI, many of our videos undergo manual review by trained third-party raters to determine ad eligibility. The millions of videos the raters evaluate train Google AI to identify similar videos in the future. A geographically distributed extended workforce ensures a global perspective in decisioning and training, and our policy is developed in consultation with specialists who have language and cultural expertise. Additionally, all content available in YouTube Select Core Lineups is manually reviewed before any ads are shown.

Google’s YouTube brand safety models use supervised algorithms to replicate, at scale and with a high degree of accuracy, the classification decisions made by Google’s human raters, using the same available video content features. Our brand safety AI models are updated daily on fixed model architecture versions to incorporate fresh training data from newly reviewed videos, and we additionally launch new classifier architectures and reclassify the corpus every few months.

Google's AI classifiers analyze and provide monetization decisions for all YouTube videos, which are visible to creators in the YouTube Partner Program (as detailed in the Monetization icon guide for YouTube Studio). We apply the same monetization policies to all YouTube videos, whether they appear on YouTube.com or embedded elsewhere, and all ads purchased through Google Ads, YouTube Reserve, or Display & Video 360 abide by those decisions. With Google AI, we also ensure age-appropriate suggestions in adjacency environments (for example, Watch Next recommendations). Google employs additional controls specifically for analyzing live streaming video content.


Advertiser Controls

All advertisers are defaulted to only show ads against content that meets the Advertiser-Friendly Guidelines, which can be seen mapped to the GARM Brand Safety Floor. This default is applied to all YouTube campaigns, including those using any Inventory Mode selection (Expanded, Standard, or Limited mode).

We also provide advertisers with access to additional account-level suitability settings (in Google Ads, YouTube Reserve, and Display & Video 360) to help them exclude content (for example, inventory types or live streams) that, while in compliance with our policies, may not fit an advertiser’s brand or business. More information on available controls can be found here.

Advertisers are advised to apply exclusions to their campaign prior to the campaign launch. Advertisers can escalate specific exclusions through their Account Managers or by using this form.


Reporting

YouTube, within the scope described above, is 99% brand safe as defined by the GARM Brand Safety Floor & Suitability Framework (BSFF), a framework created in collaboration with several industry members, including platforms and advertisers. Our advertiser safety error rate, which is under 1%, fully encompasses content considered unsafe by the BSFF. This section provides an overview of our advertiser safety error rate at the platform level.

YouTube’s existing monetization policy organization closely matches the BSFF’s categories, enabling our advertiser safety error rate measurement. We refine and iterate on policy in the normal course of business.

The advertiser safety error rate, indicating how often unsafe content is incorrectly monetized, is calculated as follows:

  • Advertiser safety error rate = # of impressions on unsafe content / # total impressions

We take 1,000 impression-weighted random samples a day (5 days a week) from across all ad impressions on YouTube. Each impression is associated with one video, which is reviewed by trained human raters and given a brand safety decision. We then calculate the advertiser safety error rate as an average across the most recent 60 sampling days, that is, across 60,000 reviewed impressions.
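In terms of the formula above, the error rate is simply the fraction of sampled impressions whose videos were rated brand-unsafe, pooled across all sampling days. A minimal sketch of this aggregation (the data, field shapes, and function name are hypothetical, not Google's implementation):

```python
# Sketch of the advertiser safety error rate calculation described above.
# Each daily sample holds 1,000 impressions; each impression carries a
# human-rater brand safety decision (True = rated brand-unsafe).
# All data below are illustrative.

def advertiser_safety_error_rate(daily_samples):
    """Pool all sampled impressions and return the unsafe fraction.

    daily_samples: list of lists of booleans, one inner list per sampling
    day (1,000 entries each), True meaning the impression's video was
    rated brand-unsafe by a trained human rater.
    """
    impressions = [unsafe for day in daily_samples for unsafe in day]
    return sum(impressions) / len(impressions)

# Example: 60 sampling days of 1,000 impressions, 8 unsafe per day,
# giving 480 unsafe out of 60,000 total impressions.
daily = [[True] * 8 + [False] * 992 for _ in range(60)]
rate = advertiser_safety_error_rate(daily)  # 480 / 60,000 = 0.008 (0.8%)
```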

Because the advertiser safety error rate is low (under 1%), the value measured from each daily set of 1,000 impressions varies noticeably from day to day. Aggregated into a moving average, however, these measurements provide a robust estimate of the advertiser safety error rate, with a 95% confidence interval whose margin of error is 5%–10% relative to the measured value of under 1%.


Known limitations of brand safety systems

Google believes the limitations below to be insignificant:

  • User data deletion and privacy. Google places the utmost value on stewarding user data. As a result, when a video is deleted from YouTube (including when a user deletes their Google Account) or a video is marked as private on YouTube, we are not able to include it for measurement purposes.
  • Campaign configuration. Ad campaigns can be configured with a wide variety of settings, targeting, and exclusion criteria. Narrowly defined campaigns may show some variability in advertiser safety error rates; this variance has been shown to be insignificant. Google conducts periodic studies at finer granularity to ensure that the platform-wide error rate remains representative of customer expectations.
  • New trends. Google’s YouTube brand safety policies prohibit ads from running on violative content. As YouTube’s video library evolves, our systems adapt to newly found types of content and our policies improve to reduce ambiguities.
  • Sampling for human review. Due to the sheer volume of videos uploaded to YouTube on a regular basis, we use a robust sampling methodology detailed above that is representative of the full population.

Client notification

While we endeavor to minimize the platform-wide advertiser safety error rate at all times, we will notify clients in the event that it should exceed 1% by using the notification area at the top of this Help Center article. The notification shall remain in place until the error rate is reliably below 1%.

For the past 12 months, the platform-wide advertiser safety error rate was greater than 1% on the following days:

  • None

Business partner qualification

YouTube platform-level ad policies apply to all parties. Learn more about the ad policies for advertisers. Additionally, the YouTube Partner Program guidelines govern which channels are eligible for monetization. Google maintains strong quality processes for onboarding third-party human raters and continuously monitoring their quality.

Any substantive changes to the brand safety methodology stated above will be communicated in this Help Center article; regular methodology changes will also be communicated within the ad buying platforms (Google Ads, YouTube Reserve, and Display & Video 360).
