App uplift experiment best practices

This article discusses best practices for app uplift experiments.

Before creating a new experiment

Understand how app uplift experiments solve your use cases

What is an app uplift experiment? App uplift experiments let you test and measure the performance uplift of adding video assets to your existing campaign.

  • We recommend the following approaches to app uplift experiments, based on your use case:

    • Try video for the first time: If you don’t currently have video in your campaign, app uplift experiments can help you understand the performance uplift of adding video assets.
    • Pick the winner among multiple video assets with directional results: If you have multiple video assets, app uplift experiments can help you understand:
      • Whether all the video assets collectively help you improve performance
      • How each of the video assets contributes, directionally, to the overall performance uplift

Minimum budget and bid

We recommend a budget and bid that enable the campaign to get at least 100 (ideally 150+) conversions per day, so that our models can optimise your campaigns. Smart Bidding simulators can help you estimate how many conversions you're likely to get when you change your budget or bid strategy target. A worked budget example follows the list below.

  • The higher the daily number of conversions in the experiment, the faster the experiment will reach statistically significant results.
  • If your base campaign contains a high number of existing video assets (more than roughly 50), the budget required to assess each asset daily is likely to be much higher.
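
As a rough illustration of the budget maths, here is a minimal sketch assuming the campaign pays roughly its target CPI per conversion; the $2.50 target CPI is an invented example, not a benchmark:

```python
# Rough budget sizing for an app uplift experiment.
# Assumption: the campaign pays roughly its target CPI per conversion,
# so daily budget ~= desired conversions/day * target CPI.
# The $2.50 target CPI is an illustrative placeholder, not a benchmark.

TARGET_CPI = 2.50          # campaign's target cost per install (example value)
MIN_CONVERSIONS = 100      # minimum recommended conversions per day
IDEAL_CONVERSIONS = 150    # ideal conversions per day

min_daily_budget = MIN_CONVERSIONS * TARGET_CPI
ideal_daily_budget = IDEAL_CONVERSIONS * TARGET_CPI

print(f"Minimum daily budget: ${min_daily_budget:.2f}")    # $250.00
print(f"Ideal daily budget:   ${ideal_daily_budget:.2f}")  # $375.00
```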

Campaign bid strategy target (tCPI/tCPE/tROAS)

If your campaign is budget constrained, ensure that your actual CPI or CPE is no less than half of your target CPI or CPE (and similarly for tROAS). This helps avoid unexpected cold start or bid-lowering behaviour; a quick check is sketched below.

In general, campaigns that aren’t constrained by budget or bids will achieve quicker and more accurate results.
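
A minimal sketch of that check, assuming you read actual and target CPI from your campaign reporting; both values below are hypothetical:

```python
# Check whether actual CPI has drifted more than 2x below target CPI,
# which can trigger unexpected cold start / bid-lowering behaviour.
# Both values are hypothetical examples; read yours from campaign reporting.

def cpi_within_safe_range(actual_cpi: float, target_cpi: float) -> bool:
    """Return True if actual CPI is no less than half the target CPI."""
    return actual_cpi >= target_cpi / 2

target_cpi = 4.00   # tCPI set on the campaign (example)
actual_cpi = 1.50   # observed CPI (example)

if not cpi_within_safe_range(actual_cpi, target_cpi):
    print("Actual CPI is more than 2x below target; consider adjusting the "
          "target before starting the experiment.")
```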

Check existing video assets

If the campaign is budget constrained

  • If your current campaign doesn’t have any videos, or has videos that aren’t spending, testing the addition of new videos is unlikely to bring a performance uplift.
  • Consider increasing the budget of the campaign until it’s no longer constrained and then evaluate the need for an uplift experiment.

If the campaign isn’t budget constrained

  • If your current campaign has video assets but they account for a low percentage of the campaign’s total spend, testing the addition of new video assets is unlikely to bring a performance uplift. A simple spend-share check is sketched after this list.
  • Consider raising your target cost per conversion (or decreasing your tROAS) until your existing video assets reach a meaningful amount of spend, then evaluate the need for an uplift experiment.
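
One way to make "a low percentage of total spend" concrete is a spend-share check; the figures and the 10% threshold below are illustrative assumptions, not official cut-offs:

```python
# Flag campaigns whose video assets capture too little spend for an
# uplift experiment to be informative. Figures are hypothetical; pull
# real spend from your Google Ads asset reporting.

def video_spend_share(video_spend: float, total_spend: float) -> float:
    """Fraction of campaign spend attributed to video assets."""
    return video_spend / total_spend if total_spend else 0.0

total_spend = 10_000.0   # campaign spend over the lookback window (example)
video_spend = 600.0      # spend attributed to video assets (example)

MEANINGFUL_SHARE = 0.10  # assumed threshold for "meaningful" spend
share = video_spend_share(video_spend, total_spend)

print(f"Video share of spend: {share:.1%}")
if share < MEANINGFUL_SHARE:
    print("Video spend share is low; adjust targets so existing video "
          "assets spend meaningfully before running an uplift experiment.")
```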

Experiment setup

Experiment goals

  • Prefer experiment metrics that are in line with your campaign optimisation goals.
    • For example, pick Install volume or CPI if your campaign is optimising for installs.
  • Prefer cost per action (install or in-app action) over conversion volume metrics, unless your campaign isn’t constrained by budget.

Experiment split

  • We recommend a 50/50 traffic and budget split in most situations to reach experiment results as fast as possible and at the lowest cost; the sketch after this list shows why an even split is fastest.
  • In certain situations, for example if you think the assets that you're testing will have a large negative impact, it may make sense to use a different traffic split (for example, 40% in the trial campaign and 60% in the base campaign).
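
To see why an even split is fastest: for a two-arm comparison, the variance of the measured difference scales with 1/n_trial + 1/n_base, which is minimised at 50/50. The sketch below quantifies this with the standard two-sample variance argument (not Google's published methodology):

```python
# Relative experiment duration for an uneven traffic split, versus 50/50.
# For a two-sample comparison the variance of the estimated difference
# scales with (1/p + 1/(1-p)), where p is the trial-arm share; the
# required duration scales the same way, and 50/50 minimises it.

def relative_duration(trial_share: float) -> float:
    """Duration needed at a given split, relative to a 50/50 split."""
    p = trial_share
    return (1 / p + 1 / (1 - p)) / 4  # denominator 4 = value at p = 0.5

for p in (0.5, 0.4, 0.3, 0.2):
    print(f"{p:.0%} trial split -> {relative_duration(p):.2f}x the 50/50 duration")
# 50% -> 1.00x, 40% -> 1.04x, 30% -> 1.19x, 20% -> 1.56x
```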

Confidence level

  • We recommend using an 80% confidence level, which generally provides good accuracy on experiment results with a shorter duration and lower cost than the 85% or 95% confidence levels.
  • If you’re unsure which confidence level to pick for your experiment, you can use the table in the appendix to find the number of conversions that you'd need in order to reach a given confidence level. The sketch after this list illustrates how the confidence level affects significance.
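
Google doesn't publish the exact statistical test that uplift experiments use, but a generic two-proportion z-test sketch shows how higher confidence levels demand more evidence before a result counts as significant; the conversion and click counts below are invented:

```python
# Generic two-sided two-proportion z-test on conversion rates, to
# illustrate how the confidence level affects significance. This is NOT
# the exact methodology Google Ads uses; it is a standard statistical
# approximation run on made-up numbers.
from math import sqrt
from statistics import NormalDist

def uplift_significant(conv_a, clicks_a, conv_b, clicks_b, confidence=0.80):
    """Return True if the rate difference is significant at `confidence`."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    return abs(z) > NormalDist().inv_cdf(1 - (1 - confidence) / 2)

# Base: 900 conversions / 60,000 clicks; trial: 980 / 60,000 (invented data)
for conf in (0.80, 0.85, 0.95):
    print(f"{conf:.0%} confidence: significant = "
          f"{uplift_significant(900, 60_000, 980, 60_000, conf)}")
# Prints True at 80% and 85%, but False at 95%.
```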

Experiment dates

  • We recommend running experiments for 30 days, if possible, to maximise the chance of conclusive experiment results.

Experiment Health Check

  • Health Check provides a series of diagnostics and checks to improve the probability of conclusive experiment results. We recommend that you fix severe issues (shown in red), such as using an iOS app (currently not supported), and make a best effort to fix moderate issues (shown in yellow), such as budget constraints. Learn more about the app uplift experiment creation Health Check.

General recommendations

Interactions with other campaigns promoting the same app

  • Ensure that the account doesn’t have another campaign promoting the same app in the same geolocations as the campaign being tested to avoid campaign cannibalisation.

Policy violations

  • Fix any policy violations in your campaign (when possible), as these could prevent one of the campaigns in your experiment from running or could delay the results.

While the experiment is running

Budget and performance target changes

  • We recommend not updating these settings for the first seven days of the experiment.
  • If changes are required after that period, prefer small, incremental daily changes over a single large change.

Asset changes

  • If you need to make a change to an asset in your base campaign, make the same change in the corresponding trial campaign at the same time.

Monitoring experiments

  • We recommend excluding the first five to 10 days of the experiment from the results (using the date selector) to avoid having the campaign learning period influence the metrics.
  • You have the option to monitor experiment results using the three confidence levels (80%, 85%, 95%).
  • If you added multiple video assets in the trial campaign, you can view the performance of an individual video asset in Google Ads reporting.

When the experiment ends

Interpreting experiment results

  • Statistically significant results
    • Positive results on both experiment goals: We recommend that you promote the asset to your base campaign and potentially other campaigns in your account where applicable (for example, campaigns with similar goals but in a different geo) to improve your overall performance.
    • Negative results on both experiment goals: We recommend that you do not promote the asset to your campaign or account. 
    • Mix of positive and negative results on experiment goals: We recommend that you make decisions based on your business needs and ROI constraints. For example, if CPI increases by 5% and installs increase by 10%, promote the assets if you're comfortable with more installs at a higher average CPI (this trade-off is worked through after this list).
  • Non-statistically significant results
    • We recommend that you make decisions based on your business needs and risk tolerance. For example, if you're comfortable with directional results, promoting assets with positive but non-statistically significant results is reasonable. Alternatively, consider making changes to the asset and conducting another experiment.
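
To make the mixed-result example concrete: since spend = CPI x installs, a 5% CPI increase combined with 10% more installs means roughly 15.5% more spend for 10% more installs. A minimal sketch with hypothetical baseline figures:

```python
# Worked trade-off for a mixed result: CPI up 5%, installs up 10%.
# Spend = CPI x installs, so relative spend = 1.05 x 1.10 = 1.155.
# The baseline figures are hypothetical.

base_cpi = 2.00        # example baseline CPI ($)
base_installs = 1_000  # example baseline installs

new_cpi = base_cpi * 1.05
new_installs = base_installs * 1.10

base_spend = base_cpi * base_installs
new_spend = new_cpi * new_installs

print(f"Installs: {base_installs} -> {new_installs:.0f} (+10%)")
print(f"Spend: ${base_spend:.0f} -> ${new_spend:.0f} "
      f"(+{new_spend / base_spend - 1:.1%})")  # +15.5%
```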
