Run a manual experiment

Experiment using your own criteria to learn how changes may impact your network

A manual experiment is an experiment you define based on your own criteria and schedule. The experiment runs on a percentage of your network’s actual traffic to test how applying your chosen settings would impact revenue. When you run an experiment, it appears on the "Experiments" page.

To see a list of all of the available manual experiment types and view your active experiments, click Optimization, and then Experiments.

Up to 100 active experiments can exist in your Ad Manager network at any given time. Active experiments include experiments that are running, paused, or have completed and are waiting for you to take action.

Get notified about your experiments

You can get notifications about your experiments through email or directly in Ad Manager. To start receiving the notifications:

  1. Sign in to Google Ad Manager.
  2. For email notifications, click Optimization, then Experiments, and then Subscribe for email updates.
  3. For notifications in Ad Manager, click Notifications, then Settings, and then Experiments notifications.

Manual experiments overview

To run a manual experiment:

  1. Select the experiment type and criteria you want to use.
  2. Define an experiment trial and let the trial run for a specified amount of time.
  3. Edit conditions to automatically pause the experiment, or disable auto-pause for the experiment.
  4. Compare the impression traffic allocated to the "variation" group with the traffic allocated to the "control" group to see which performed better during the experiment (a per-impression comparison sketch follows this overview).
  5. Run more trials as needed.
  6. Decide whether to apply the experiment settings to your Ad Manager network.

You can also run an experiment from an opportunity suggested by Ad Manager.
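
The comparison in step 4 is easiest to make on a per-impression basis, because the variation and control groups usually receive different shares of traffic. The following is a minimal sketch with hypothetical numbers (not Ad Manager data or an Ad Manager API) showing why revenue should be normalized by impressions before the two groups are compared:

  # Hypothetical trial results: the variation group received 60% of impressions,
  # the control group received the remaining 40%.
  variation = {"impressions": 600_000, "revenue_usd": 1_260.0}
  control = {"impressions": 400_000, "revenue_usd": 820.0}

  def rpm(group):
      """Revenue per thousand impressions, so unequal traffic shares stay comparable."""
      return group["revenue_usd"] / group["impressions"] * 1000

  lift_pct = (rpm(variation) - rpm(control)) / rpm(control) * 100
  print(f"Variation RPM: {rpm(variation):.2f}, Control RPM: {rpm(control):.2f}")
  print(f"Observed lift: {lift_pct:+.1f}%")  # about +2.4%

Raw revenue alone would make the variation look roughly 50% better in this example, even though the per-impression difference is only about 2%.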

Run an experiment

Complete the following steps to run a manual experiment.

Note: The steps differ slightly for each experiment type. For more details, go to Choose a manual experiment type below.  

  1. Sign in to Google Ad Manager.
  2. Click Optimization, and then Experiments.
  3. On the card for the experiment type you want to use, click New experiment.
  4. Name your experiment so you can refer to results more easily.
  5. Enter the settings that are specific to your experiment type.
  6. Under "Experiment period," set a start date and an end date for the experiment trial.
    • Start date: Each trial needs to run for at least 7 days to improve the chance of reaching conclusive results. You can schedule a trial to start immediately, or choose a later date. All trials start at 12:00 AM and end at 11:59 PM on the scheduled dates in your local time zone. Data is refreshed daily. If you set the start date to the current day, the trial will start within the next hour.
    • End date: Each experiment trial can run up to 31 days total. When the trial ends, you can review the results to decide if you want to apply it as an opportunity, run another trial, or end the experiment and keep the original settings.
  7. Under "Traffic allocation," set the percentage of impression traffic to allocate to the experiment.
  8. Under "Auto-pause experiment," set up to 10 auto-pause conditions, selecting Cumulative or Daily for each condition:
    • Cumulative: The amount of revenue loss over the duration of the trial that will pause the experiment.
    • Daily: The amount of revenue loss within the last full day of data that will pause the experiment.
      Note: Auto-pause checks once per day whether results meet the specified conditions. To ensure experiments aren’t paused before they have had a chance to collect data, trials won’t be paused within the first 24 hours of starting. To avoid sampling errors, trials are only paused based on statistically significant results. For example, a trial is paused when the lower bound of a 95% confidence interval meets the necessary threshold (see the sketch after these steps).
  9. Click Save.
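
For illustration only, here is a minimal sketch of the significance-gated check described in the auto-pause note above: an estimated revenue loss triggers a pause only if the lower bound of its 95% confidence interval still meets the configured condition. The function name, the standard-error input, and the numbers are hypothetical; this is not Ad Manager's actual computation.

  def should_auto_pause(loss_estimate, loss_std_error, threshold, z=1.96):
      # Pause only when even the conservative (lower) bound of the estimated
      # revenue loss exceeds the configured auto-pause condition.
      lower_bound = loss_estimate - z * loss_std_error
      return lower_bound >= threshold

  # Hypothetical daily check: estimated loss of $120 with a $30 standard error,
  # against a configured daily condition of $50 revenue loss.
  print(should_auto_pause(loss_estimate=120.0, loss_std_error=30.0, threshold=50.0))  # True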

Best practices for manual experiments

Applying changes to your settings can affect how buyers and other market participants behave, for example by changing buying patterns. To get the most benefit from the manual experiments you run and to capture their potential impact on market behavior, we recommend the following best practices.

Ramp up experiments using trials

Experiments applied to lower percentages of traffic are less risky, but they’re also less likely to encourage behavior change from other market participants.

Once performance reaches an acceptable level at a lower traffic allocation, you can start a second trial with the same settings at a higher allocation to better understand the impact of updating all of your network’s traffic to those settings. This is especially important for pricing changes, which are more likely to elicit responses from buyers.

It’s important to consider that experiments with lower traffic allocation percentages may not have strong enough results to change behavior. As you ramp up to higher traffic allocations you have a better chance of changing market behavior and measuring the effects of that behavior change.

Run longer experiments

It’s important to run experiments over a long enough period of time for behaviors to change and for the impact of those changes to be measured by the experiment. It often takes 7 or more days for behavior changes to influence revenue.

Consider total revenue in addition to comparisons between different settings

Experiment settings may impact behavior in both the variation group and the control group. Verify that revenue in the variation group is in line with your expectations when considered against your network as a whole.

Choose a manual experiment type

The experiment type you choose determines the traffic allocation and criteria used to run a manual experiment.

Unblock categories

Unblocking categories allows more advertisers and buyers to compete for your inventory, which increases coverage and helps you maximize your revenue.

Experiment criteria

  • Protection: Select the protection to which you want to apply this experiment.
  • Unblock the following category: Select the category you want to unblock during the experiment.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.

Unified pricing rules

Unified pricing rule experiments let publishers run manual experiments that change the floor price on any unified pricing rule. You can experiment with raising or lowering the floor price of the rule and compare the results.

Overlapping pricing rules: Unified pricing rule experiments evaluate performance across all ad requests that match the targeting criteria of the rule, including ad requests associated with overlapping targeting on other pricing options. This allows Ad Manager to account for pricing changes that shift the balance of impressions and revenue onto other pricing rules or pricing options.

The CPM displayed is an average across all ad requests that matched the targeting criteria. This means the CPM may be lower than the floor in some unified pricing rule experiments if a large share of the matching traffic is not subject to the pricing option you selected and is only subject to a lower-priced rule.

Example

  • Pricing rule P1 targets ad unit XYZ and has one sub-rule targeting advertiser A123 with a $10 floor
  • Pricing rule P2 targets ad unit XYZ and has one sub-rule for all creatives with a $2 floor

If you run an experiment on P1, the report would likely show an eCPM below $10. The $10 floor only applies when advertiser A123 wins the auction; however, the experiment considers all traffic where ad unit XYZ was present, which would have a $2 floor due to the second rule.
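
A quick calculation with hypothetical numbers (the traffic shares and clearing prices below are illustrative, not Ad Manager data) shows how the blended average lands below the $10 floor:

  # Hypothetical mix of the traffic matching ad unit XYZ:
  # 20% is won by advertiser A123 under P1's $10 floor, clearing at $11 on average;
  # 80% clears under P2's $2 floor at $2.50 on average.
  share_p1, cpm_p1 = 0.20, 11.00
  share_p2, cpm_p2 = 0.80, 2.50

  blended_ecpm = share_p1 * cpm_p1 + share_p2 * cpm_p2
  print(f"Reported average eCPM: ${blended_ecpm:.2f}")  # $4.20, below P1's $10 floor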

Experiment criteria

  • Unified pricing rule: Select the unified pricing rule that determines the traffic on which this experiment runs.
  • Pricing option: Select the pricing option to use for the duration of this experiment.
  • Experiment price: The price you want to be applied to traffic in the variation group.
  • Affected remnant line items: The estimated number of remnant line items that are affected by changing the experiment floor price relative to the original price.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.

Native ad style

Run an A/B test using two sets of native ad styles, including visual elements and other updates. Compare the results to determine which would perform better in your network.

The original style that you want to test against a new design is the “control.” The new design is the “experiment” style. You update the experiment’s settings in an attempt to improve performance compared to the control style. You can then analyze the two styles’ performance and determine which settings you want to keep.

Native experiments can only compare two native styles on existing native placements. You can’t compare banner and native ads in the same ad placement.

If an experiment targets a control native style that mixes both programmatic and traditional traffic, your reservation traffic will be affected.

Experiment criteria

  • Native style: Select the native style with the format and deal eligibility on which you want to run an experiment.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
  • Add targeting: Select targeting for the experiment. Targeting must match line item targeting to serve successfully.

Formats

You can test new ad formats to understand their impact on your inventory. Currently, the web interstitial and anchor formats are available for testing.

During the experiment, on a percentage of all page loads that match the targeting, the experimental ad unit is automatically added as a codeless ad unit and loads the experimental format.

At the end of the experiment, click Apply to automatically apply the changes to your website as a codeless ad unit. Retagging is not required.

Experiment criteria

  • Format: Select the format for which you want to run an experiment.
  • Experimental ad unit: Select the ad unit to be used for trafficking and to understand experimental performance in reporting. Note that the ad unit is not used for targeting, meaning the format will serve to any request that matches the experiment's targeting. We recommend setting up a new ad unit.
  • Target: Define where you want the experiment to run.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of estimated impressions you want to allocate to the experimental format during the experiment. The rest go to other ads. For example, if you allocate 60% of impressions to the experimental format, other ads get the remaining 40%. Keep this allocation in mind when analyzing the experiment results.

Header bidding trafficking for Prebid

When you set up header bidding trafficking, you can choose to run an experiment to gauge the impact. For details, visit Enable header bidding for an ad network.

Yield groups

Enable a new yield group to understand the impact of the additional demand on your inventory.

Experiment criteria

  • Yield group settings: Select the yield group to use for the experiment. A yield group must be set as "Inactive" before it can be used in an experiment (the experiment works by activating it). When used in an experiment, the yield group’s status is identified as "Experimenting." When you end the experiment by applying or declining it, the yield group’s status changes to "Active" or "Inactive," accordingly.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of impressions you want to allocate for the yield group to compete on during the experiment. Only the impressions targeted by the yield group are affected; the rest are filled as if the yield group were inactive. For example, if you allocate 60% of impressions to the experimental yield group, 40% will be unaffected. Keep this allocation in mind when analyzing the experiment results.

User messages

This type of experiment is set up through Privacy & messaging. For details, visit Create a user message experiment.

Video ad rule (Beta)

A/B ad rule experiments help you determine the right ad load and ad break structure for your users. Choose a video ad rule, a date range, and the percent of traffic for your experiment, then make adjustments based on the results. 

Configurable settings for this experiment include:

  • Ad load throughout the stream and ad break placement: Make selections for pre-roll, mid-roll, and post-roll.
  • Ad spot level customization for the ad break: Define where specific sponsorships should live in the break.
  • Pod duration, number of ads, and ad duration
  • Frequency caps

Learn more about video ad rules.

Required permissions

Video ad rule experiments require the following permissions:

  • View ad rules: Lets users view experiment results created on the network.
  • Edit ad rules: Lets users create, apply, and decline experiments.

Experiment criteria

Basic info

Enter a name for the experiment and make other selections:

  • Video ad rule: Select the video ad rule that the experiment should run on.
    Note: Currently, this experiment type is available for standard video ad rules.
  • Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
  • Traffic allocation: The percent of estimated impressions you want to allocate to the experiment ad rule during the experiment. The rest go to the original ad rule. For example, if you allocate 60% of impressions to the experiment ad rule, the original ad rule gets the remaining 40%.

    Note: If PPID is available, the experiment will be diverted at the user level, meaning a single user will always see the same ad rule variant. Otherwise, traffic is diverted at the stream level for each VMAP request. (A minimal sketch of user-level diversion follows.)
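
As a rough illustration of user-level diversion (a sketch of the general technique, not Ad Manager's actual mechanism), a deterministic hash of the PPID and an experiment identifier can be mapped onto the traffic allocation so that the same user always sees the same variant:

  import hashlib

  def assign_variant(ppid: str, experiment_id: str, traffic_allocation_pct: int) -> str:
      # Deterministic bucketing: the same PPID always lands in the same bucket.
      digest = hashlib.sha256(f"{experiment_id}:{ppid}".encode()).hexdigest()
      bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
      return "experiment" if bucket < traffic_allocation_pct else "control"

  # Example: a 60% allocation; repeated calls with the same PPID return the same variant.
  print(assign_variant(ppid="user-123", experiment_id="ad-rule-exp-1", traffic_allocation_pct=60))

Without a PPID there is no stable user key, which is why diversion falls back to the stream level for each VMAP request.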

Experiment settings

Under “Experiment settings,” you can review the control and make your selections for the experiment. 

  • Control: The control settings are shown for your reference. Refer to them as you make your selections for the experiment ad rule. Note that the control settings can’t be changed.

    To view the control settings, click Expand.

  • Experiment: Make your selections for the experiment, and then click Save to start the experiment as scheduled.

Review experiment results

After your experiment is complete, you can evaluate the results.

 
