Monitor your experiments

After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing in comparison to the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.

This article explains how to monitor and understand your experiments’ performance.

Instructions

View your experiment’s performance

  1. In your Google Ads account, click the Campaigns icon.
  2. Click the Campaigns drop-down in the section menu.
  3. Click Experiments.
  4. Find and click the experiment that you want to check performance for.
  5. Review the “Experiment summary” table and scorecard, then choose to Apply experiment or End experiment.

What the scorecard shows

  • Performance comparison: This shows the dates for which your experiment's performance is being compared to the original campaign's performance. Only full days that fall within both your experiment's start and end dates and the date range you've selected for the table below are shown. If there is no overlap, the full days between your experiment's start and end dates are used for "Performance comparison."
  • By default, you’ll notice performance data for Clicks, CTR, Cost, Impressions, and All conversions, but you can select the performance metrics you want to view by clicking the down arrow next to the metric name.
  • The first line below each metric name shows your experiment’s data for that metric. For example, if you notice 4K below “Clicks”, that means your experiment’s ads have received 4,000 clicks since it began running.
  • The second line shows an estimated performance difference between the experiment and the campaign.
    • The first value shows the performance difference your experiment saw for that metric when compared to the original campaign. For example, if you notice +10% for Clicks, it’s estimated that your experiment received 10% more clicks than the original campaign. If there’s not enough data available yet for the original campaign and/or the experiment, you’ll notice “‑‑”.
    • The second value shows the confidence interval for that performance difference. For example, if you chose a 95% confidence interval and notice [+8%, +12%], it means that there might be anywhere from an 8% to 12% increase in performance for the experiment when compared to the campaign. If there’s not enough data available yet for the original campaign and/or the experiment, you’ll notice “‑‑”. You can pick your own confidence interval (80% is the default) and better understand your experiment metrics with dynamic confidence reporting. A rough sketch of how a lift and interval like this can be estimated appears after this list.
    • If your result is statistically significant, you’ll also find a blue asterisk.
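Google Ads computes the lift percentage and confidence interval for you, and its exact methodology isn’t published. Purely as an illustration of the idea, here is a minimal sketch that estimates a relative lift and a bootstrap confidence interval from hypothetical daily click counts; the figures, function names, and bootstrap approach are assumptions for illustration only.

```python
# Illustrative sketch only: the data and the bootstrap approach are assumptions,
# not Google's actual methodology. It estimates the relative lift of an
# experiment over the original campaign and a confidence interval for it.
import random

# Hypothetical daily clicks over the comparison window.
original_daily_clicks = [510, 495, 530, 488, 502, 515, 497]
experiment_daily_clicks = [560, 548, 571, 539, 555, 562, 544]

def relative_lift(experiment, original):
    """Percentage difference of the experiment vs. the original campaign."""
    return (sum(experiment) - sum(original)) / sum(original) * 100

def bootstrap_interval(experiment, original, confidence=0.95, n_resamples=10_000):
    """Approximate a confidence interval for the lift by resampling days."""
    lifts = []
    for _ in range(n_resamples):
        exp_sample = random.choices(experiment, k=len(experiment))
        orig_sample = random.choices(original, k=len(original))
        lifts.append(relative_lift(exp_sample, orig_sample))
    lifts.sort()
    lower = lifts[int((1 - confidence) / 2 * n_resamples)]
    upper = lifts[int((1 + confidence) / 2 * n_resamples) - 1]
    return lower, upper

lift = relative_lift(experiment_daily_clicks, original_daily_clicks)
low, high = bootstrap_interval(experiment_daily_clicks, original_daily_clicks)
print(f"Estimated lift: {lift:+.1f}% (95% CI: [{low:+.1f}%, {high:+.1f}%])")
```

With these hypothetical figures the estimated lift is roughly +9.7%, and the printed interval plays the same role as the bracketed range shown on the scorecard.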

Understanding the metrics

You can now use information in the experiments table to understand the results of your experiment, and take appropriate action. The “Experiments” table contains the following columns:

  • Name: This shows the name of your experiment. You can click the experiment name to find more details than what’s available in the table.
  • Type: This shows the type of experiment you’re currently performing (for example, Uplift from Performance Max, Custom display, Video, and many others).
  • Status: This shows the current stage of your experiment (such as “In progress”, “Complete (Applied)”, and “Scheduled”).
  • Results: This shows which arm of the experiment performed better during the experiment duration.
    • Control campaign: This indicates that the control arm performed better than the treatment arm in the experiment.
    • Treatment campaign: This means that the treatment arm performed better than the control arm in the experiment.
    • No clear winner or In progress: This means that either the winner can’t be determined or there’s not enough data yet. We recommend allowing your experiment to run for 2 to 3 weeks to gather data. If results are still undecided, you may need to increase your budget or allow the experiment to run for longer to gather enough data to generate a clear winner.
  • Actions: You can view the recommended action for your experiment (for example, “Apply”).
  • Start date: It shows the start date of your experiment.
  • End date: It shows the end date of your experiment.
  • Metrics: Depending on the goals, experiment type, and metrics selected during experiment creation, you can view several metrics in the table, such as Conversions or Conv. value. These represent the percentage difference achieved by the treatment arm over the control campaign. By hovering over the text in this column, you can view additional information, including the confidence interval.
  • You can select additional metrics by clicking the Columns icon, selecting metrics, and clicking Save. You can also remove columns in the same way.

Tip

Point your cursor over the second line below a metric in the scorecard for a more detailed explanation of what you’re reviewing. You'll be able to view the following information:

  • Statistical significance: You’ll find whether your data is statistically significant (a small illustrative check appears after this list).
    • Statistically significant: This means that your data is likely not due to chance, and your experiment is more likely to continue performing with similar results if it’s converted to a campaign.
    • Not statistically significant: These are some possible reasons why your data may not be statistically significant:
      • Your experiment hasn’t run for long enough.
      • Your campaign doesn’t receive enough traffic.
      • Your traffic split was too small and your experiment isn’t receiving enough traffic.
      • The changes you’ve made haven’t resulted in a statistically significant performance difference.
  • Confidence interval: You’ll also find more details about the confidence interval for the performance difference with an explanation like the following: “There's a 95% chance that your experiment notices a +10% to +20% difference for this metric when compared to the original campaign.”
  • Finally, you’ll find the actual data for that metric for the experiment and the original campaign.
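The “statistically significant” label and the blue asterisk roughly correspond to the confidence interval for the performance difference excluding zero. As a small illustration (not Google’s actual significance test), the check could look like this:

```python
# Illustrative only: Google Ads determines significance internally. This sketch
# just checks whether a lift confidence interval excludes zero, which is the
# intuition behind the "statistically significant" label described above.
def is_significant(ci_lower_pct: float, ci_upper_pct: float) -> bool:
    """A lift interval that does not contain 0% suggests a real difference."""
    return ci_lower_pct > 0 or ci_upper_pct < 0

print(is_significant(8.0, 12.0))   # True: the whole interval is above 0%
print(is_significant(-3.0, 5.0))   # False: the interval straddles 0%
```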

What you can do in the scorecard

  • In the scorecard, you can change the metric shown using the drop-down next to the metric name.
  • To review the scorecard for an ad group in the experiment, click an ad group in the table below.
  • To view details like the name and budget of a campaign, hover over the corresponding cell in the table.

Understand the time series chart

The time series chart displays the performance of up to 2 metrics in your experiment and shows how they've changed over time in both treated and control campaigns. With this chart, you can compare the effects your experiments have on a particular metric and learn more about how it performs over time.


Apply or end an experiment

To apply an experiment on a campaign or end an experiment for any reason, click the "Apply" or "End" button in the lower right corner of the "Experiment summary" card above the time series chart.

How to auto-apply favorable experiment results (only available for some types of experiments)

This feature is enabled by default. If the experiment's results are favorable compared to the base campaign, the feature will auto-apply the trial campaign and shift 100% of traffic to the trial campaign. This allows you to benefit from the performance improvements of your experiments with little effort.

Note: You can disable the auto-apply feature at any time during your experiment from the "Report" page.

You can create an experiment using recommendation cards on the "Experiments" page. During creation, you can choose to enable this feature. After you create an experiment, a tooltip will be shown on the experiment summary card with the feature status. From this tooltip, you’ll also be able to turn this feature on or off, and the tooltip status will update to reflect your choice.

Additionally, the status column on your "Experiments" page may have one of the following states showing which experiments have been applied:

  • Complete (Not applied)
  • Complete (Applying…)
  • Complete (Applied or Converted)

When your experiment is complete, its tooltip state will update to let you know whether or not your changes were applied.


Additional information specific to some experiment types

Broad Match experiments

Incremental queries

Broad Match experiments (created through the "Create New Experiment" page or Recommendation cards) now provide deeper insights with incremental queries.

Incremental queries are search terms that matched a broad match keyword in your experiment but weren’t matched by the Google Ads account during the time period of the experiment. These queries have at least one conversion attributed to their click.

Incremental queries help you understand the net new traffic that was matched and converted by broad match keywords in your campaign and that wasn't matched and converted by any other keyword in your account during your experiment.

You can view up to 5 top incremental queries, if available, on your "Experiments" home page and "Report" page.


Performance Max experiments

What happens when you pause or remove Performance Max experiment campaigns

You can pause or end your control or treatment campaign at any time. To restart a paused experiment, use the "Resume" button or manually reactivate the campaigns.

  • Pausing campaigns:
    • If you pause either the control or treatment campaign, the experiment will be paused. 100% of the traffic will go to the remaining campaign that's active.
    • If you pause both the control and treatment campaign, the experiment will be paused.
    • Experiment status on end date:
      • If the experiment reaches the end date, the Performance Max syncer will change the experiment status to “Ended” or “Launched” regardless of auto-apply. However, if the experiment result is favorable and auto-apply is turned on, the auto-apply pipeline will automatically graduate the “Ended” experiment afterwards.
      • If the experiment reaches the end date and the auto-apply is turned off, you’ll need to apply the changes manually.
  • Removing campaigns:
    • If you remove the control or treatment campaign, 100% of the traffic will go to the remaining campaign that's active.
      • If you only remove the control campaign, the experiment will be launched.
      • If you only remove the treatment campaign, the experiment will end.
    • If you remove both the control and treatment campaign, the experiment will end.
Note: This only applies to Performance Max experiments, including Uplift, Upgrades, and Optimization (Final URL Expansion).
| User action | Control status | Treatment status | Traffic to control | Traffic to treatment | Experiment status before end date | Experiment status on end date |
| --- | --- | --- | --- | --- | --- | --- |
| Pause either campaign | Paused | Active | 0% | 100% | Paused | If auto-apply is enabled and the treatment arm has favorable results, the experiment will be Launched; otherwise, the experiment will be Ended |
| Pause either campaign | Active | Paused | 100% | 0% | Paused | Same as above |
| Pause both campaigns | Paused | Paused | 0% | 0% | Paused | Same as above |
| Remove either campaign | Removed | Active | 0% | 100% | Launched | |
| Remove either campaign | Active | Removed | 100% | 0% | Ended | |
| Remove both campaigns | Removed | Removed | 0% | 0% | Ended | |
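For quick reference, the behavior in the table above can be summarized as a small decision function. This is only an illustrative restatement of the documented rows; the function name and return shape are hypothetical and are not part of any Google Ads API.

```python
# Hypothetical helper that restates the table above; not a Google Ads API.
def experiment_state(control: str, treatment: str) -> dict:
    """Map campaign statuses ("active", "paused", "removed") to the traffic
    split and the experiment status before the end date, per the table."""
    if "removed" in (control, treatment):
        # Removing the control launches the experiment; removing the treatment
        # (or both campaigns) ends it. Traffic shifts to the remaining campaign.
        status = "Launched" if control == "removed" and treatment != "removed" else "Ended"
        traffic = {
            "control": "100%" if treatment == "removed" and control != "removed" else "0%",
            "treatment": "100%" if control == "removed" and treatment != "removed" else "0%",
        }
        return {"experiment_status": status, "traffic": traffic}
    if "paused" in (control, treatment):
        # Pausing either (or both) campaigns pauses the experiment; traffic
        # shifts entirely to whichever campaign is still active, if any.
        traffic = {
            "control": "100%" if control == "active" else "0%",
            "treatment": "100%" if treatment == "active" else "0%",
        }
        return {"experiment_status": "Paused", "traffic": traffic}
    # Both campaigns active: the experiment continues with its configured split.
    return {"experiment_status": "In progress",
            "traffic": {"control": "configured split", "treatment": "configured split"}}

print(experiment_state("paused", "active"))   # Paused; 100% of traffic to treatment
print(experiment_state("removed", "active"))  # Launched; 100% of traffic to treatment
```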

Review or edit a campaign comparable to an existing Performance Max campaign

By default, comparable campaigns are excluded from reporting. Your Report page won’t show comparable Performance Max campaigns unless you enable comparable campaign reporting in Google Ads.

During the experiment, you can manually edit comparable campaigns; otherwise, Google will choose the comparable campaigns for you. After the experiment has ended, you’ll be unable to add or remove comparable campaigns.

  • View results alongside comparable campaigns: To determine how your experiment performs compared to similar campaigns, enable the "View Results with Comparable Campaigns" toggle in Google Ads reporting.
  • Edit comparable campaigns (until experiment ends): You have complete control over which campaigns are considered "comparable" throughout the experiment. Simply edit your selections within your Performance Max experiments. If you don't manually choose comparable campaigns, Google will automatically select them for you. Once the experiment ends, you won't be able to add or remove comparable campaigns from the results.
Important: It will take one day after the experiment starts to populate the list of comparable campaigns that were automatically chosen for the experiment. After the list is populated, you’ll get an option to edit the comparable campaign list, and add or remove any campaigns from your experiment. You can make these changes until the experiment concludes. Learn more about comparable campaigns.
