The following is a summary of industry standards for measuring clicks and video impressions/viewability/TrueView views.
The current Media Rating Council (MRC) accreditation certifies that Google's measurement adheres to these industry standards; the scope of the accreditation is detailed below.
The industry guidelines were developed in an effort coordinated by the Interactive Advertising Bureau (IAB) and the MRC to govern how interactive advertising clicks are counted and how invalid clicks are detected and handled. A CPA firm engaged by the MRC conducted the audit against these guidelines.
You'll find a summary below of the click measurement process employed by Google Ads and AdSense. For additional resources, please see the IAB/MRC Click Measurement Guidelines, which describe the IAB standards for counting online ad clicks.
What is Google accredited for?
The accreditation certifies that Google's measurement technology adheres to the industry standards for counting interactive advertising metrics and that its processes supporting this technology are accurate.
The scope for the Google Ads accreditation includes:
Google Ads Clicks Audit
- Clicks
- Invalid Clicks
Google Ads Video Audit (Video Viewability Report)
- Impressions
- Invalid Impressions
- Measurable Impressions
- Non-measurable Impressions
- Viewable Impressions
- Non-viewable Impressions
- Measurable Rate
- Viewable Rate
- Impression Distribution
- TrueView: Views
- TrueView: Invalid Views
Environments: Desktop, Mobile App, Mobile Web and CTV.
*Video Viewability Metrics have been accredited for Desktop, Mobile App and Mobile Web environments only.
The accreditation certifies that Google’s video impression and viewability measurement technology adheres to the industry standards for counting video advertising impressions and measuring viewability rates.
What's included in the audit process?
This audit is focused on Google's pay-per-click, video impressions, TrueView views and video viewability advertising systems. Google provides these advertiser-facing solutions through Google Ads and publisher-facing solutions through AdSense and YouTube.
Only the Video Viewability Report is submitted for MRC Video Metrics accreditation.
Google advertisements may be administered to users through the following products or services: AdSense for Content (AFC), AdSense for Domains (AFD), AdSense for Search (AFS), Ad Exchange (AdX), YouTube and Google.com.
- AFC relates to advertisements displayed on the pages of a partner site, where the context of the information on the page is used to determine and display relevant advertisements.
- AFD relates to advertisements displayed on the pages of a particular domain, where the domain name itself is analogous to a search query.
- AdX relates to advertisements displayed on participating partner sites, where the context of the page and real-time bidding are used to determine and display relevant advertisements.
- YouTube relates to advertisements displayed on YouTube.com or in the YouTube app, where the context of the video and search queries are used to determine and display relevant advertisements.
- AFS and Google.com relate to advertisements displayed as paid results within the context of a search engine query and results.
What is not included in the audit process?
Google's non-video impression-based advertising solutions, such as Google Marketing Platform, and systems which measure clicks for non-commercial purposes (such as Google search) are outside of the scope of this audit. Other systems outside the scope of this audit include related support and management systems such as Google Analytics. In addition:
- Google Ads report builder and other dashboard video metrics aren't part of the accreditation process.
- Other device types are also not part of the accreditation process.
- Video campaign and App campaign Clicks are not a part of the MRC accreditation process for Clicks.
- Dashboard metrics segmentation (for example, demographics) beyond display and search campaign totals and device type segmentation is not part of the MRC accreditation process.
Click measurement methodology
The measurement methodology is based on all click activity recorded, and doesn't utilise sampling for the purposes of click measurement. Only stage 2.2 of the IAB click referral cycle (measured clicks) is directly observed by Google. With respect to the click-referral cycle, upon receipt of the initial click transaction by the ads redirect server, Google Ads records the click and issues a non-cacheable HTTP 302 redirect to the browser, based on the location established by the advertiser for the specific advertisement. This constitutes the measured click. The click measurement methodology is the same across all device types (desktop, mobile, tablet) and for browsers and mobile apps unless otherwise noted.
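To make the measured-click step concrete, here is a minimal, illustrative sketch of a redirect endpoint. It is not Google's implementation; the ad IDs, field names and local log file are assumptions for illustration only. It simply records a click and issues a non-cacheable HTTP 302 to an advertiser-defined landing page.

```python
# Minimal sketch of a click-measurement redirect endpoint (illustrative only;
# not Google's implementation). A click is "measured" when the server logs it
# and returns a non-cacheable HTTP 302 pointing at the advertiser's final URL.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical lookup of the advertiser-defined landing page for an ad ID.
LANDING_PAGES = {"ad-123": "https://advertiser.example/landing"}

class ClickRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        ad_id = params.get("ad_id", ["unknown"])[0]
        destination = LANDING_PAGES.get(ad_id, "https://advertiser.example/")

        # Record the measured click to an event log before redirecting.
        event = {
            "ts": time.time(),
            "ad_id": ad_id,
            "ip": self.client_address[0],
            "user_agent": self.headers.get("User-Agent", ""),
        }
        with open("click_events.log", "a") as log:
            log.write(json.dumps(event) + "\n")

        # Non-cacheable 302 redirect to the advertiser's landing page.
        self.send_response(302)
        self.send_header("Location", destination)
        self.send_header("Cache-Control", "no-store, no-cache, must-revalidate")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ClickRedirectHandler).serve_forever()
```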
Parallel tracking is currently mandatory for Search, Shopping, Display and Video campaigns. Parallel tracking sends customers directly from your ad to your final URL (landing page) while click measurement happens in the background (without sending them to the tracking URLs first).
Ads can be displayed on mobile devices that are supported by the Google Mobile Ads SDK (see a list of currently supported platforms).
A known limitation of this method of measuring clicks is that a network interruption may prevent a user who successfully receives a 302 redirect from viewing the resulting advertiser website.
The counting methodology utilised is the multiple-click-per-impression method. Consequently, to avoid inappropriate counting of navigational mistakes (for example, multiple clicks per user), we require that the time between a given click and a previous click on the ad impression is greater than a specific period of time.
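As an illustration of this de-duplication rule, the sketch below keeps only clicks that occur more than a chosen minimum interval after the previous counted click on the same impression. The threshold value is an assumption; the actual period Google uses is not disclosed.

```python
# Illustrative de-duplication of repeated clicks on the same ad impression
# (sketch only; the actual threshold Google uses is not disclosed).
MIN_SECONDS_BETWEEN_CLICKS = 2.0  # assumed value for illustration

def deduplicate_clicks(clicks):
    """clicks: list of dicts with 'impression_id' and 'ts' (seconds), sorted by ts."""
    last_counted = {}  # impression_id -> timestamp of last counted click
    counted = []
    for click in clicks:
        imp = click["impression_id"]
        prev_ts = last_counted.get(imp)
        if prev_ts is None or click["ts"] - prev_ts > MIN_SECONDS_BETWEEN_CLICKS:
            counted.append(click)
            last_counted[imp] = click["ts"]
        # Otherwise the click is treated as a navigational mistake and not counted.
    return counted

clicks = [
    {"impression_id": "imp-1", "ts": 10.0},
    {"impression_id": "imp-1", "ts": 10.5},  # too soon after the previous click: dropped
    {"impression_id": "imp-1", "ts": 15.0},  # counted
]
print(len(deduplicate_clicks(clicks)))  # 2
```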
Logs are generated and processed in real time, storing all data associated with observed HTTP transactions. Numerous variable and heuristic techniques are utilised to implement the click filtration systems, which won't be enumerated here to protect their security.
Both Google and its partners deliver the advertisements to users; however, Google maintains control over and performs the processes related to measurement and advertiser reporting of click activity. When a user clicks on an advertisement, whether delivered by Google or a partner, via any one of the products administering the advertisement, the click activity is tracked by Google Ads through the ads redirect servers.
Measurement of click activity is based on the Google Ads click measurement methodology, which utilises a technology infrastructure to manage and monitor click events. A click is recorded (measured) when Google Ads has received an initiated click and sends the user an HTTP 302 redirect to the advertiser landing page or website (or other intermediate server such as an advertiser's agent). These measured click events are recorded to data logs within an event file system. The data log files are then accumulated, edited and compiled through fully automated processes to produce click measurement and advertiser reporting. The editing process includes filtering out erroneous or corrupt data, identified non-human traffic (including robots and other automated processes) and other identified invalid click activity. The filtered clicks are considered invalid, which means they are not billable to the advertiser. Google prepares click reports for advertisers, which can be directly accessed by the respective advertiser.
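The sketch below illustrates the general shape of such a log-processing step. It is purely illustrative; the field names, the toy filtration rule and the aggregation level are assumptions. Filtered clicks are marked invalid and excluded from billable totals, while the remainder feed advertiser reporting.

```python
# Illustrative log-processing step (not Google's implementation): mark clicks
# that fail filtration as invalid, then aggregate billable clicks per campaign.
from collections import Counter

def is_invalid(click, known_bot_agents):
    """Toy filtration rule: treat clicks from known bot user-agents as invalid."""
    return click["user_agent"] in known_bot_agents

def process_click_log(clicks, known_bot_agents):
    billable = Counter()
    invalid = Counter()
    for click in clicks:
        if is_invalid(click, known_bot_agents):
            invalid[click["campaign_id"]] += 1  # not billable to the advertiser
        else:
            billable[click["campaign_id"]] += 1
    return billable, invalid

clicks = [
    {"campaign_id": "c1", "user_agent": "Mozilla/5.0"},
    {"campaign_id": "c1", "user_agent": "ExampleBot/1.0"},
]
print(process_click_log(clicks, {"ExampleBot/1.0"}))
# (Counter({'c1': 1}), Counter({'c1': 1}))
```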
Click measurements can be reported aggregated by geographical location (not subject to MRC accreditation) and device type. Geographical location is based on the user’s IP address or on a publisher-provided location (publishers must obtain user permission to provide such location). Note that some traffic may be routed through a service provider’s proxy servers and so might not correctly reflect the user’s actual location (for example, mobile carriers may proxy mobile traffic). Device type classification (computer, tablet and mobile devices) is based on information from the HTTP header, using libraries operated by Google.
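As a rough illustration of user-agent-based device classification: the libraries Google actually operates are not public, so the patterns below are simplistic assumptions showing only the general idea.

```python
# Simplistic user-agent based device classification (illustrative only; the
# real classification libraries and rules are not public).
def classify_device(user_agent: str) -> str:
    ua = user_agent.lower()
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "mobi" in ua or "iphone" in ua or "android" in ua:
        return "mobile"
    return "computer"

print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)"))  # mobile
print(classify_device("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))               # computer
```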
In some AdSense implementations, partners render ads on their own site subject to their own design and formatting rules and control the clickable area around the ad. In these implementations, adjustment of this area is beyond Google's control. In regular AdSense implementations, Google both controls the clickable area as well as renders the ad impression directly to the end user.
Video impression, viewability and TrueView views measurement methodology
Google allows Google Ads users to create video campaigns, upload and manage creatives and set bidding strategies and related targeting for their campaign. Google Ads video ad content must be hosted on YouTube; however, these video ads can appear on YouTube and on video partner sites and apps across the Google Display Network (GDN).
Google’s proprietary Interactive Media Ads Software Development Kit (IMA SDK) is integrated directly into the YouTube video player, the YouTube mobile app, or video partner sites and apps to facilitate communication between the video players and the ad server for video measurement. Google maintains two versions of the IMA SDK, one which supports Flash and the other which supports HTML5. The IMA SDK is a Video Ad Serving Template (VAST) (versions 2.0, 3.0 or 4.0) compliant tag implementation used to serve and track digital video ads and to measure both linear and non-linear video ad content. The IMA SDK also supports the Video Player Ad-Serving Interface (VPAID) (version 2.0), which allows the video ad and video player to communicate with each other, and Video Multiple Ad Playlist (VMAP), which allows multiple ads to be played within the video ad content.
All measured YouTube video ads included in the video viewability report are delivered in-stream. For video ad impressions, measurement utilises the count-on-begin-to-render methodology. When properly implemented by the video ad content publisher, the Google Ads IMA SDK solutions are consistent with the Video Impression Guidelines requirements regarding post-buffering initiation of the measurement event. TrueView in-stream ads are often referred to as 'skippables': they have a skip button that gives viewers the option to skip the ad after 5 seconds, and they run in-stream (pre-, mid- or post-roll) within a video. TrueView is a cost-per-view format, which means that we only charge the advertiser when the viewer 'views' the ad. TrueView views are not related to viewability. With TrueView in-stream ads, you pay when a viewer watches 30 seconds of your video (or the full duration if it's shorter than 30 seconds) or interacts with your video, whichever comes first. A view is defined for TrueView in-stream ads in the following ways:
TrueView in-stream:
- Watch 30 seconds (includes the five-second forced duration), or to completion if the ad is less than 30 seconds
- Click channel title/avatar*
- Click the video title*
- Click on card’s teaser*
- Click on Share*
- Click companion banner/Video Wall*
- Click on call-to-action extension*
- Click to visit advertiser’s site*
- Click on end screens*
*(Not material, hence not included in the MRC accreditation audit.) Currently, the above non-material interactions represent only 1.8% of total TrueView views traffic for campaigns. Actions that are not considered a view include clicks on the following:
- Annotations
- Like (positive)
- Full screen
- InVideo programming (doesn't serve on TrueView in-stream ads)
- Watermark
- Skip button
When an advertiser is charged for a view as shown in the Google Ads UI, the view will also increment the public YouTube.com view count.
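The sketch below restates the view definition above as simple decision logic. It is illustrative only; the engagement event names are assumptions, not Google's actual event taxonomy.

```python
# Illustrative TrueView in-stream view decision (sketch only; engagement names
# are assumptions). A view is counted at 30 seconds watched (or full duration
# for shorter ads) or on a qualifying interaction, whichever comes first.
QUALIFYING_ENGAGEMENTS = {
    "channel_title", "video_title", "card_teaser", "share",
    "companion_banner", "call_to_action", "visit_advertiser_site", "end_screen",
}

def is_trueview_view(watch_time_s, ad_duration_s, engagement=None):
    """engagement: name of the user interaction, if any (None for plain watching)."""
    if engagement in QUALIFYING_ENGAGEMENTS:
        return True  # qualifying interaction counts as a view immediately
    # Non-view clicks (skip, like, full screen, watermark, annotations) do not
    # count by themselves, so the decision falls through to the watch-time rule.
    return watch_time_s >= min(30.0, ad_duration_s)

print(is_trueview_view(12.0, 45.0, engagement="share"))  # True (interaction)
print(is_trueview_view(12.0, 45.0, engagement="skip"))   # False
print(is_trueview_view(20.0, 20.0))                      # True (watched to completion)
```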
Upon receipt of the measurement event, Google maintains control over subsequent processing and reporting. Google Ads uses a combination of user-agent and mobile app SDK data from internal and external sources to classify device types. Google Ads doesn't rely on any third party to perform classification.
In some instances, continuous play is a factor, such as when Autoplay is active or the user is viewing a video in a playlist. When this is the case, certain rules are followed: when using Wi-Fi, continuous play stops automatically after four hours; when using a mobile network, continuous play stops after 30 minutes of inactivity. Currently, Google quantifies GVP publishers utilising continuous play (traffic volume is 7%); 1% of GVP traffic is defined as continuous play. Approximately 22% of video traffic is autoplay. Learn more about Autoplay videos.
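As a simple illustration of those stop rules (illustrative only; the parameter names are assumptions):

```python
# Illustrative continuous-play stop rules (sketch only): stop after 4 hours on
# Wi-Fi, or after 30 minutes of inactivity on a mobile network.
WIFI_LIMIT_S = 4 * 60 * 60
MOBILE_INACTIVITY_LIMIT_S = 30 * 60

def should_stop_continuous_play(network, session_elapsed_s, inactivity_s):
    if network == "wifi":
        return session_elapsed_s >= WIFI_LIMIT_S
    if network == "mobile":
        return inactivity_s >= MOBILE_INACTIVITY_LIMIT_S
    return False

print(should_stop_continuous_play("wifi", 5 * 60 * 60, 0))      # True
print(should_stop_continuous_play("mobile", 10 * 60, 40 * 60))  # True
```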
Google indicated that companion display ads are measured independently of video ad impressions and are not reported in the Google Ads Video Viewability Report; as a result, measurement and reporting of companion display ads are excluded from the scope of this engagement.
For video viewability, Google Ads utilises the Active View description of methodology to measure viewability as reported within the Google Ads reporting platform. Google Ads counts a viewable video impression when at least 50% of the video ad creative appears within the viewable area of a user’s browser/app for two continuous seconds. However, for the 'Vertical video ads' format, Google Ads measures the viewability of the player rather than the creative. Advertisers must opt in to buying this format.
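A minimal sketch of that viewability rule, assuming a stream of hypothetical visibility samples: an impression becomes viewable once at least 50% of the creative is within the viewable area for two continuous seconds.

```python
# Illustrative Active View-style check (sketch only): a video impression is
# viewable once >= 50% of the creative is in the viewport for 2 continuous seconds.
def is_viewable(samples, min_fraction=0.5, min_continuous_s=2.0):
    """samples: list of (timestamp_s, visible_fraction) measurements, sorted by time."""
    run_start = None
    for ts, fraction in samples:
        if fraction >= min_fraction:
            if run_start is None:
                run_start = ts
            if ts - run_start >= min_continuous_s:
                return True
        else:
            run_start = None  # visibility dropped below 50%: the continuous run resets
    return False

samples = [(0.0, 0.6), (0.5, 0.7), (1.0, 0.3), (1.5, 0.8), (2.5, 0.9), (3.6, 0.9)]
print(is_viewable(samples))  # True (continuously >= 50% from 1.5s to 3.6s)
```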
Filtration methodology
Google tries to identify and filter both general and sophisticated invalid traffic continuously through data-based identifiers, activities and patterns. This identification and filtration is done across clicks and video impressions, and includes non-human activity and suspected fraud. However, because user identification and intent cannot always be known or discerned by the publisher, advertiser or their respective agents, it is unlikely that all invalid traffic can be identified and excluded from the reported results proactively. To protect invalid traffic filtration processes from becoming compromised or reverse-engineered, no details of specific filtration procedures, beyond those detailed here, will be disclosed, other than to auditors as part of the audit process.
Both specific identification (including obeying robot instruction files, filtration lists and publisher test activity) and activity-based filtration methods (including analysing multiple sequential activities, outlier activity, interaction attributes and other suspicious activity) are utilised in filtration.
In addition, the following parameters apply to the filtration methodology:
- Third-party filtration is not used by Google.
- Robot instruction files (robots.txt) are employed.
- Sources used for identification of non-human activity: Google uses the IAB/ABCe International Spiders & Robots List, as well as additional filters based on past robotic activities. The IAB Robots List exclude file is used (see the sketch after this list).
- Activity-based filtration processes: Activity-based identification involves conducting certain types of pattern analyses, looking for activity behaviour that is likely to identify non-human traffic. Google's Ad Traffic Quality team has systems in place to detect suspicious activity and performs such activity-based filtering appropriately.
- All filtration is performed 'after the fact' and passively. That is, the user (browser, robot, etc.) is served their request without any indication that their traffic has been flagged or will otherwise be filtered and removed, as Google does not want to signal to the user agent that their activity has triggered any of Google's filtering mechanisms. In some cases, front-end blocking is also utilised, when it is likely that the resulting ad request may lead to invalid activity. Historically, less than 2% of ad requests are blocked.
- Processes have been implemented to remove self-announced pre-fetch activity.
- Processes are in place to allow publisher test clicks and video impressions. These processes support publishers adding a specific tag to an ad request to indicate that the ad request is a test request and should not be counted for any billing or official accounting purposes.
- When inconsistencies or mistakes are detected, processes exist to correct this data and provide refunds to advertisers. These refunds are reflected in the billing summaries. The corruption of log files is extremely rare, but in cases where this may occur, processes exist to recover them.
- Processes have been implemented to remove activity from Google-internal IP addresses.
- Filtration rules and thresholds are monitored continuously. They can be changed manually, and are updated automatically on a regular basis.
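As an illustration of spiders-and-robots-list filtration referenced in the list above: the real IAB/ABCe list is licensed and its format is not reproduced here, so the patterns below are stand-ins used purely to show the matching idea.

```python
# Illustrative filtration against a spiders & robots exclusion list (sketch
# only; the real IAB/ABCe list is licensed, so stand-in patterns are used here).
EXCLUSION_PATTERNS = ["examplebot", "spider", "crawler"]  # stand-ins for real entries

def is_robot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(pattern in ua for pattern in EXCLUSION_PATTERNS)

print(is_robot("ExampleBot/2.1 (+https://example.com/bot)"))  # True -> filtered as invalid
print(is_robot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False -> kept
```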
Business partner qualification
All partners that display Google Ads on their content are required to adhere to our programme policies, which prohibit invalid activity. Learn more about AdSense Programme policies.
Google filters for invalid traffic on an ongoing basis, and will review any business partners that receive high amounts of invalid traffic. Partners who continually receive high amounts of invalid traffic may have their account suspended or closed.
Click data reporting
Google Ads reports the total number of clicks, the total number of impressions and subsets of this data (for example, clicks, impressions and click-through rates, by campaign, ad group and keyword) to advertisers, and similar data corresponding to site statistics to publishers. The scope of the audit process covers the click and advertiser reporting for Google Ads. These figures may fluctuate to an extent during the course of the month and aren't considered finalised until they're frozen at month end. After this time, the reported clicks won't be adjusted. However, credits may be given to advertisers if Google deems it appropriate.
Google Ads includes the capability for advertisers to see the total number of daily clicks filtered (marked invalid) for each campaign. Google Ads doesn’t report general invalid traffic and sophisticated invalid traffic totals separately to prevent this data from being reverse engineered to optimise invalid traffic. Approximately 80% of total invalid click traffic is estimated to be general invalid traffic.
Comprehensive unit test procedures are utilised to ensure the accuracy of reported data in the Google Ads and AdSense front ends. These are the primary mechanisms utilised to ensure that data from back-end databases are conveyed accurately in user-facing reports. In addition, user feedback is carefully monitored to discover and correct any errors which may make it through to a release. Numerous automated systems are in place to ensure the proper operation of all machines and software reporting data to Google Ads users. The content of the reporting, however, is primarily verified through the aforementioned unit tests and user feedback.
Electronic records relating to click activity are retained indefinitely. However, two data fields, IP and cookie IDs, are anonymised after a specified time period (9 months for IP addresses and 18 months for cookie IDs).
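A minimal sketch of such time-based anonymisation follows. It is illustrative only: the field names and masking scheme are assumptions, not Google's actual process; only the 9-month and 18-month retention periods come from the text above.

```python
# Illustrative time-based anonymisation of click records (sketch only; field
# names and masking scheme are assumptions, not Google's actual process).
import time

IP_RETENTION_S = 9 * 30 * 24 * 3600       # ~9 months
COOKIE_RETENTION_S = 18 * 30 * 24 * 3600  # ~18 months

def anonymise_record(record, now=None):
    now = now or time.time()
    age = now - record["ts"]
    if age > IP_RETENTION_S:
        # Mask the final octet of an IPv4 address, keeping only coarse information.
        record["ip"] = ".".join(record["ip"].split(".")[:3] + ["0"])
    if age > COOKIE_RETENTION_S:
        record["cookie_id"] = None
    return record

record = {"ts": time.time() - 10 * 30 * 24 * 3600, "ip": "203.0.113.42", "cookie_id": "abc123"}
print(anonymise_record(record))  # IP masked after ~9 months; cookie ID still retained
```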
You can derive Gross Clicks by reporting on Clicks and Invalid Clicks (GIVT & SIVT) and summing the two.
Dashboard metrics presented outside of Search and Display Click campaign totals and device type segmentation are not submitted for MRC accreditation.
Search clicks from Connected TV (CTV), unknown and other device types, which are not MRC accredited, may be commingled with accredited desktop clicks and total clicks metrics on Google Ads reporting dashboards. Traffic from unaccredited device types is estimated to be less than 1% of search click traffic.
Video data reporting
Google Ads reports the total number of video viewable impressions, viewability metrics (outlined below) and subsets of this data to advertisers.
For the purposes of MRC accreditation, only the Google Ads metrics listed in the download-only video viewability report are in scope. Other reporting of video impression and viewability metrics across the front-end tools, for example the Campaign reporting front end, is excluded from accreditation.
The metrics in the downloadable video viewability report are reported Total Net of GIVT and SIVT across desktop, mobile web and mobile in-app environments. GIVT metrics are also reported for clicks. Approximately 79% of total invalid video impressions traffic and 18% of total invalid TrueView views is estimated to be general invalid traffic. Due to the nature of TrueView views ad format implementation, the GIVT percentage will be lower. GIVT metrics for Google Ads clicks and video are not MRC accredited. These metrics have been submitted and are pending accreditation for desktop, mobile web and mobile in-app environments.
The following video ad formats are measured and reported in the video viewability report for video ads running on YouTube and Google Display Network. All other ad formats, which aren't described below, are excluded from the video viewability report, including ads served in the YouTube Kids mobile application.
- Skippable in-stream ads: Skippable in-stream ads are 5 seconds or longer and play before, during or after other videos. After 5 seconds, viewers have the option to skip the ad. TrueView views are only available in the skippable in-stream ads format.
- Bumper ads: Bumper ads are 5–6 seconds long and play before, during or after other videos. The viewers won’t have an option to skip the ad.
- Non-skippable in-stream ads: The non-skippable in-stream ads can be up to 15 seconds long and play before, during or after other videos. The viewers won’t have an option to skip the ad.
GVP in-stream ad placements served on mobile app environments are as follows:
- In-stream placements from IMA SDK (e.g. a pre-roll video ad within a Paramount+ stream)
- Rewarded app ads (e.g. a full-screen video ad that rewards the user with an extra life in a game)
- App interstitials (e.g. a full-screen video ad that plays before a level in Candy Crush)
Connected TV
CTV devices that are certified to carry YouTube must inform the app when the app is not visible (e.g. user switched HDMI inputs, user turned device off); this ensures that YouTube stops video playback (and by extension, ads are not served) when the app is not visible. In rare instances applicable to YouTube and GVP, Google is not able to determine if a TV device is off. There are no latency measurement limitations.
Machine learning
Google uses supervised machine learning techniques1 through methods such as Classification (e.g. Neural Network approach), in which the model will predict invalid traffic (IVT) by making a yes/no decision about whether an event is invalid, and Logistic Regression, in which the model scores various activities and then an IVT decision is made based on score thresholds. Supervised machine learning models may also use tree methods and graph methods.
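As a rough illustration of the score-and-threshold approach described above: the sketch below trains a logistic regression on labelled events and flags those whose score exceeds a threshold as invalid traffic. The features, toy data, scikit-learn library and threshold are all assumptions for demonstration and are not Google's models or data.

```python
# Illustrative score-and-threshold IVT classifier (sketch only; features, data
# and threshold are invented for demonstration and are not Google's models).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per event: [clicks_per_minute, distinct_ads_clicked, session_seconds]
X_train = np.array([[1, 1, 120], [2, 2, 300], [40, 25, 10], [60, 30, 5]])
y_train = np.array([0, 0, 1, 1])  # 0 = valid, 1 = invalid traffic (labelled data)

model = LogisticRegression().fit(X_train, y_train)

def is_invalid_traffic(event_features, threshold=0.9):
    """Score the event, then make an IVT decision based on a score threshold."""
    score = model.predict_proba([event_features])[0, 1]
    return score >= threshold, score

print(is_invalid_traffic([55, 28, 8]))   # high score: flagged as invalid
print(is_invalid_traffic([2, 1, 200]))   # low score: treated as valid
```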
Data sources used for machine learning include logs of queries and interactions ('ads logs'), non-logs data that can be joined with ads logs and a variety of other supplementary proprietary signals. Google relies on hundreds of data sources of varying sizes: the total number of records per data source ranges from thousands to trillions, depending on the data source. Traffic-based models are required to be evaluated with a minimum of 7 days of traffic as input data.
For active defences, Google maintains monitoring procedures over the traffic signals (training data) feeding into the models, which trigger alerts for human intervention if certain threshold bounds are not met. As a result, minimal, if any, reduced accuracy is expected.
Models are continuously retrained when appropriate and practical, and model performance is regularly or continuously assessed. As a result (similar to our monitoring procedures above) minimal, if any, reduced accuracy is expected.
Biases in machine learning training and evaluation data are minimal; if they were material, the IVT defence would not be approved. All machine learning projects ('launches') go through a cross-functional review process before they are approved. As part of this process, bias for the model(s) and corresponding data is evaluated, and projects must meet predetermined ad traffic quality criteria before being approved. Continuous monitoring is in place to detect the emergence of bias in models, which in turn triggers alerts and model evaluation, analysis and updates.
Google applies a mix of machine learning and/or human intervention/review techniques on all traffic. For some defences Google relies on ML-based lead generation followed by human review. Other defences start with human review data and use ML to generalise. Our application of machine learning and human intervention/review techniques is evolving, and our usage shifts according to multiple criteria, including alerts, escalations and organic fluctuations in types of invalid traffic that may emerge. As a result, the distribution is not in steady state, and the 'level' of reliance on either machine learning or human intervention/review fluctuates over time.
1Supervised machine learning relies on labelled input and output data, meaning that there is an expectation for what the output of a machine learning model will be.