Maximize your native ad performance by A/B testing different native styles. Test visual elements and other updates in two styles to see which one performs better before making a change.
How it works
The original style that you want to test against a new design is the "control." The new design is the "experiment" style. You update the experiment’s settings in an attempt to improve performance compared to the control style. You can then analyze the two styles’ performance and determine which settings you want to keep.
If an experiment targets a control native style that mixes both programmatic and traditional traffic, your reservation traffic will be affected.
Run an experiment
- Sign in to Google Ad Manager.
- Click Delivery, then Native.
- From the table, click a native style that meets both of these requirements:
- Has a value of "Native content ad," "Native app install ad," "Native video content ad," or "Native video app install ad" in the "Format" column.
- Has a value of "Programmatic & traditional" in the "Deal eligibility" column.
- From the "Style your native ad" page, click Create A/B experiment.
The "Run A/B experiment" settings appear in the right panel.
- Select when the experiment will run from the date dropdown.
- Under "Traffic allocation," enter the percentage of impressions to allocate to the experiment style during the experiment. The rest will go to the control style.
- For example, if you allocate 60% of impressions to the experiment style, the control style will get the remaining 40%.
- Enter 50% for an equal allocation of impressions between the experiment and control styles.
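The traffic split works like a weighted coin flip per impression. The sketch below is an illustration of that allocation logic, not Ad Manager's actual implementation; the function name and the 60/40 split are assumptions taken from the example above.

```python
import random

def assign_style(experiment_share: float) -> str:
    """Assign one impression to the experiment or control style.

    experiment_share is the fraction of impressions allocated to the
    experiment style (e.g. 0.6 for a 60% allocation).
    """
    return "experiment" if random.random() < experiment_share else "control"

# With a 60% allocation, roughly 60% of impressions go to the experiment
# style and the remaining ~40% go to the control style.
counts = {"experiment": 0, "control": 0}
for _ in range(10_000):
    counts[assign_style(0.6)] += 1
```

Over many impressions the observed split converges on the configured allocation, which is why the allocation percentage matters when you later compare raw totals between the two styles.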
- Click Experiment and make updates to the experiment style for the experiment.
Make the changes that you think might improve the resulting native ads' performance.
- Click Continue, make any needed targeting changes, and click Save and finish.
Analyze your experiment and take action
After the experiment has run for two days, it should have gathered enough data. You can then review the results and decide whether you want to switch to the experiment settings.
- Click Delivery, then Native.
- From the table, click the native style that is running an experiment.
All such native styles have an "Experiment running" label in their row.
- On the "Style your native ad" page that appears, click View experiment on the right side of the page.
- (Optional) If the experiment is still running, pause it by expanding the "Running" dropdown and clicking Pause.
When you pause an experiment, 100% of traffic goes to the control (original) native style.
- (Optional) Click Preview styles to see what a resulting ad would look like from each native style.
- Review the data to see how the experiment is performing compared to the control (original) style.
Remember to keep the traffic allocation in mind when analyzing the results. The allocation appears in the lower left-hand corner.
- After the experiment ends, all traffic is allocated to the control (original) native style. At any time, you can choose to apply the experiment settings or keep the control (original) style's settings.
- Apply variation: The control native style is updated to match the experiment style.
- Decline variation: The control (original) native style retains its settings.
Understand experiment results
Experiments display the following metrics along with a "+/-% of control" value, which helps you compare the performance between the experiment and control native styles.
For example, an "Experiment revenue" of "$10,000 / +10.0% of control" means the experiment style is estimated to receive $10,000 in revenue, which is 10% higher than the estimated revenue for the control (original) style.
- Experiment revenue
Net revenue generated from ad impressions served (with adjustments for ad spam and other factors). This amount is an estimate and subject to change when your earnings are verified for accuracy at the end of every month.
- Experiment eCPM
Ad revenue per thousand Ad impressions
Ad eCPM = Revenue / Ad impressions * 1000
- Experiment CTR
For standard ads, your ad clickthrough rate (CTR) is the number of ad clicks divided by the number of individual ad impressions, expressed as a percentage.
Experiment CTR = Clicks / Ad impressions * 100
- Experiment coverage
The percentage of ad requests that returned ads.
Experiment coverage = (Matched requests / Ad requests) * 100
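The metric formulas above, plus the "+/-% of control" comparison, can be sketched in Python. The function names and sample numbers are illustrative, not part of Ad Manager:

```python
def ecpm(revenue: float, impressions: int) -> float:
    """Ad eCPM = Revenue / Ad impressions * 1000."""
    return revenue / impressions * 1000

def ctr(clicks: int, impressions: int) -> float:
    """CTR = Clicks / Ad impressions * 100 (a percentage)."""
    return clicks / impressions * 100

def coverage(matched_requests: int, ad_requests: int) -> float:
    """Coverage = Matched requests / Ad requests * 100 (a percentage)."""
    return matched_requests / ad_requests * 100

def pct_of_control(experiment_value: float, control_value: float) -> float:
    """'+/-% of control': relative difference of the experiment metric
    versus the same metric for the control style, as a percentage."""
    return (experiment_value - control_value) / control_value * 100

# Illustrative values: $10,000 experiment revenue vs. ~$9,090.91 control
# revenue gives roughly the "+10.0% of control" from the example above.
revenue_lift = pct_of_control(10_000.0, 9_090.91)
```

A positive "+/-% of control" value means the experiment style outperformed the control on that metric; a negative value means it underperformed.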