In this article:
- About experiments
- Create your experiment
- Monitor your experiment
- Choose the winner of your experiment
- Understand your experiment results
About experiments
Running an experiment lets you compare one of your ad settings against a variation of that setting to discover which one performs better. Experiments work by splitting your site's traffic between the original ad setting and the variation, so that their performance can be measured side by side.
Experiments help you make informed decisions about how to configure your ad settings, and can help you increase your earnings.
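To make the idea of a traffic split concrete, here is a minimal sketch in Python. It is illustrative only: AdSense performs the split for you, and the 50/50 ratio, the simulated page views, and the per-view earnings figures below are assumptions for demonstration, not how experiments are implemented internally.

```python
import random

# Illustrative sketch: split simulated page views 50/50 between an original
# ad setting and a variation, and accumulate revenue for each arm so the two
# can be compared side by side. All numbers here are made up for the example.
random.seed(42)

revenue = {"original": 0.0, "variation": 0.0}
views = {"original": 0, "variation": 0}

for _ in range(100_000):  # simulated page views
    arm = "original" if random.random() < 0.5 else "variation"
    views[arm] += 1
    # Hypothetical per-view earnings; the variation earns slightly more on average.
    mean_earnings = 0.0020 if arm == "original" else 0.0022
    revenue[arm] += random.expovariate(1 / mean_earnings)

for arm in ("original", "variation"):
    print(f"{arm}: {views[arm]} views, ${revenue[arm]:.2f} revenue")
```

Because each arm serves a comparable share of traffic over the same period, differences in accumulated revenue can be attributed to the setting being tested rather than to changes in overall traffic.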
To view your "Experiments" page:
- Sign in to your AdSense account. Click Optimization, then Experiments.
Create your experiment
When you create an experiment, you:
- Select the original ad setting that you want to compare the experiment variation against.
- Select which settings you’d like to change for the variation.
- Depending on the experiment type, choose whether you'd like Google to automatically apply the winning setting for you after your experiment has finished.
Tip: Selecting this option can help save you time, especially if you're planning to run lots of experiments.
You can create the following types of experiments:
- Auto ads experiments to discover which of your settings perform best, such as ad formats and ad load.
- Blocking controls experiments to test different ad categories or ad serving settings.
- Search style experiments to improve your revenue by testing out different styles or experimenting with extensions.
Monitor your experiment
On the "Experiments" page, there's an overview of your experiments which shows their current status and progress, and highlights any experiments that are "Result ready" (which is when we recommend you choose a winner).
Status | What it means
---|---
Running | Your experiment is in progress and collecting data.
Result ready | Your experiment has collected sufficient data and is now ready for you to choose a winner. Learn how to choose the winner of an experiment.
Finished | One of the following has occurred: you chose the winner of your experiment, Google automatically applied the winning setting for you (if you selected that option), or the experiment was stopped because it didn't collect sufficient data within the time limit.
Choose the winner of your experiment
When your experiment has collected sufficient data, you can choose the winner of your experiment. We recommend that you wait until your experiment is marked "Result ready" before you choose a setting as the winner.
- If you choose the original as the winner, then your original settings are retained.
- If you choose the variation as the winner, then we apply the settings of the variation to your account.
In either case, we stop splitting your traffic, and your experiment ends.
- If you've opted to let Google choose the winner of your experiment, the best performing settings will be automatically applied for you. For more information, see Understand your experiment results.
- If your experiment hasn't collected sufficient data by the time limit (21 days for search style experiments, or 90 days for all other experiment types, unless you set a shorter duration), we'll automatically stop it. If you've opted to let Google choose the winner, we'll revert your settings to the way they were before the experiment started. Otherwise, you'll have another 30 days to choose the winner of the experiment or, in the case of search style experiments, until you edit the style, whichever happens first.
- If an Auto optimize experiment can't confidently determine that a change improves performance within the experiment's time limit, we'll retain your original settings.
If you have further questions about experiments, visit the Experiments FAQ.
Understand your experiment results
Experiments evaluate the performance of the variation using the change in revenue attributed to the variation. In an experiment's results card, you'll find the following metrics:
- The estimated monthly earnings scaled to 100% of your traffic.
Note: This metric is an estimate and doesn't necessarily reflect the amount you'll ultimately be paid.
- The revenue for traffic tested with the original and the variation.
- The percentage of revenue uplift (see the example calculation after this list).
- If your experiment shows a high probability that one setting outperforms the other, or that there's little difference in performance between the two settings, you'll see the probability that the recommended setting is better than the other setting, for example, "80% chance the variation will perform better than the original". Note that this score is likely to be less accurate if you stopped your experiment early.
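The sketch below shows how the revenue uplift percentage and the earnings scaled to 100% of traffic relate to the revenue figures in the results card. It is a hedged illustration: the dollar amounts, test duration, and the assumed 50/50 split are made up for the example and are not AdSense's internal formulas.

```python
# Illustrative only: relating the results-card figures to each other.
original_revenue = 120.00   # revenue from traffic tested with the original
variation_revenue = 126.00  # revenue from traffic tested with the variation

# Percentage of revenue uplift: relative change of the variation vs. the original.
uplift_pct = (variation_revenue - original_revenue) / original_revenue * 100
print(f"Revenue uplift: {uplift_pct:+.1f}%")  # +5.0%

# Estimated monthly earnings scaled to 100% of traffic: project what the
# variation might earn if it served all traffic, extrapolated to a month.
days_in_test = 14
variation_traffic_share = 0.5  # assumed 50/50 split for this example
scaled_monthly = variation_revenue / variation_traffic_share / days_in_test * 30
print(f"Estimated monthly earnings at 100% of traffic: ${scaled_monthly:.2f}")
```

A figure scaled in this way is only a projection from the test period, which is one reason the estimated monthly earnings don't necessarily match what you'll ultimately be paid.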