Brand Lift uses survey data to measure how your ads influence people. You can set up Brand Lift to show people surveys about your product or brand.
In order to accurately detect your lift, a certain number of survey responses is required.
How Display & Video 360 measures your Brand Lift
Display & Video 360 measures how much lift your ads generated based on the difference in positive survey responses between people who saw your ads and people who were withheld from seeing your ads. Generally, more responses are required to accurately detect smaller amounts of absolute lift. Before your lift is detected, you'll see an estimate of it based on your response count.
When to expect detectable lift
Use the following guidelines to estimate how many responses are required to detect your lift.
- For high-performing line items, you can expect to detect lift once you receive about 2,000 responses per lift metric.
- At the recommended budget minimum, you can expect to detect lift once you receive 5,600 responses per lift metric.
- If your line item has not shown any lift after reaching 16,800 responses per metric, you may not be able to detect your lift.
Required total responses for measuring Brand Lift
To measure Brand Lift accurately at various levels, the total response count must be within a certain range. The smaller the absolute lift, the more survey responses are required to ensure accuracy. The table below shows the required total response count for a given detectable absolute lift:
| Detectable absolute lift | Required total response count |
|---|---|
| > 4% | 1,200 ~ 2,800 |
| 3% | 2,800 ~ 5,000 |
| 2% | 5,000 ~ 11,000 |
| 1.5% | 11,000 ~ 20,000 |
| 1% | 20,000 ~ 45,000 |
| 0.5% | 45,000 ~ 180,000 |
| < 0.5% | > 180,000 |
Example
For detectable absolute lift percentages not mentioned in the chart, you may need to estimate to find the total required response count.
Let's say you have 0.75% absolute lift and want to know the number of responses you need to detect it. 45,000 responses would be more than you need (since the minimum required to detect 0.5% absolute lift is 45,000 responses), while 20,000 responses wouldn't be enough (since the minimum required to detect 1% absolute lift is 20,000 responses).
Since 0.75% is halfway between 1% and 0.5%, you would need roughly halfway between 20,000 and 45,000 responses to detect 0.75% absolute lift (about 33,000 survey responses).
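This kind of estimate can be sketched with a simple linear interpolation between two rows of the table above. The function name is hypothetical, and the linear assumption is only a rough estimation aid, not an official formula:

```python
# Rough linear interpolation between two rows of the response-count table.
# Assumes the minimum required responses scale roughly linearly between
# adjacent table rows (an approximation for illustration only).

def interpolate_min_responses(lift_pct, low=(0.5, 45_000), high=(1.0, 20_000)):
    """Estimate the minimum response count for a lift percentage that
    falls between two table rows of (lift %, minimum responses)."""
    (low_lift, low_resp), (high_lift, high_resp) = low, high
    frac = (lift_pct - low_lift) / (high_lift - low_lift)
    return low_resp + frac * (high_resp - low_resp)

# 0.75% sits halfway between 0.5% and 1%, so the estimate lands halfway
# between 45,000 and 20,000 responses.
print(interpolate_min_responses(0.75))  # 32500.0, roughly the 33,000 above
```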
As your Brand Lift metric's absolute lift approaches 0, more survey responses are required to measure it accurately. When there's only a small difference between the responses of people who have seen your ads and those who haven't, more responses are needed to determine that difference reliably.
Brand Lift metrics
Absolute brand lift
This metric shows the difference in positive responses to brand or product surveys between the group of people who saw your ads (the exposed group) and the group withheld from seeing your ads (the baseline group). This metric is calculated by subtracting the positive response rate of the baseline group from the exposed group. Absolute brand lift measures how much your ads influenced your audience's positive feelings towards your brand or product. For example, an increase from 20% to 40% in the positive survey responses between the two surveyed groups represents an absolute lift of 20%.
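As a minimal sketch, the calculation described above is a subtraction of the two positive response rates (the 20% and 40% values are the example from the text):

```python
# Absolute brand lift = exposed positive response rate
#                     − baseline positive response rate
exposed_positive_rate = 0.40   # group that saw the ads
baseline_positive_rate = 0.20  # group withheld from the ads

absolute_lift = exposed_positive_rate - baseline_positive_rate
print(f"{absolute_lift:.0%}")  # 20%
```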
Absolute brand lift and insertion order performance
Absolute lift doesn't necessarily reflect your overall brand lift performance. It is better to focus on a metric like cost-per-lifted user as the primary success metric of your insertion order, because it factors in both reach and cost. See the following table:
| Insertion Order | Cost | Cost per 1,000 impressions (CPM) | Reach | Absolute lift | Lifted users | Cost-per-lifted user |
|---|---|---|---|---|---|---|
| Insertion Order 1 | $100 | $15 | 6,666 | 10% | 667 | $0.15 |
| Insertion Order 2 | $100 | $5 | 20,000 | 5% | 1,000 | $0.10 |
| Difference | n/a | 66% | 200% | 50% | 50% | 33% |
If you consider absolute lift only, Insertion Order 1 appears to perform better than Insertion Order 2. However, at the same cost, Insertion Order 2 drove 50% more lifted users, at a 66% lower CPM, and with a 33% more efficient cost-per-lifted user.
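The table's derived columns can be reproduced with two small helper functions (the function names are hypothetical; the cost, reach, and absolute-lift inputs come from the table):

```python
# Lifted users = reach × absolute lift.
# Cost-per-lifted user = total cost ÷ lifted users.

def lifted_users(reach, absolute_lift):
    return round(reach * absolute_lift)

def cost_per_lifted_user(cost, lifted):
    return cost / lifted

io1_lifted = lifted_users(6_666, 0.10)    # 667
io2_lifted = lifted_users(20_000, 0.05)   # 1000

print(cost_per_lifted_user(100, io1_lifted))  # ~0.15 per lifted user
print(cost_per_lifted_user(100, io2_lifted))  # 0.10 per lifted user
```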
Lifted users
This shows the estimated number of users in a sample survey whose perception of your brand changed as a result of your ads, extrapolated to the overall reach of the campaign. It shows the difference in positive responses to your brand or product surveys between the group of users who saw your ad and the group who didn’t. For example, your ads could result in a lift in consideration, awareness, or ad recall with regard to your brand or product.
The “lifted users” metric doesn’t necessarily measure unique users. A user may become lifted more than once during the course of your campaign.
Lifted users (co-viewed)
The estimated number of users whose perception of your brand changed as a result of your ads, including lifted users from co-viewed impressions on CTV devices. Learn more about co-viewing.
Cost per lifted user
This shows the average cost for a lifted user who's now thinking about your brand after seeing your ads. Cost per lifted user is measured by dividing the total cost of your campaign by the number of lifted users. You can use this metric to understand the cost to change someone’s mind about your brand in terms of brand consideration, ad recall, or brand awareness.
Headroom lift
The impact your ads had on increasing positive feelings towards your brand or product, relative to the growth potential that remained. This metric is calculated by dividing absolute lift by 1 minus the positive response rate of the baseline group. For example, an increase from 20% to 40% in the positive survey responses between the baseline and exposed groups represents a headroom lift of 25%.
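Using the definition above, the 20%-to-40% example works out as follows (a minimal sketch with the rates from the text):

```python
# Headroom lift = absolute lift ÷ (1 − baseline positive response rate)
baseline_rate = 0.20
exposed_rate = 0.40

absolute_lift = exposed_rate - baseline_rate         # 0.20
headroom_lift = absolute_lift / (1 - baseline_rate)  # 0.20 / 0.80
print(f"{headroom_lift:.0%}")  # 25%
```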
Relative brand lift
The difference in positive responses to brand or product surveys between users who saw your ads and users who were withheld from seeing your ads. This difference is then divided by the positive response rate of the group of users who didn't see your ads. The result measures how much your ads influenced your audience's positive perception of your brand. For example, an increase from 20% to 40% in the positive survey responses between the two surveyed groups represents a relative lift of 100%.
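The same 20%-to-40% example, computed per the definition above (a minimal sketch):

```python
# Relative lift = (exposed rate − baseline rate) ÷ baseline rate
baseline_rate = 0.20
exposed_rate = 0.40

relative_lift = (exposed_rate - baseline_rate) / baseline_rate
print(f"{relative_lift:.0%}")  # 100%
```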
Since survey responses can't be collected from the entire exposed and baseline groups, this data is extrapolated from the responses that have been collected, which gives you an estimate within a certain range. Usually, the confidence level is 90%, so you can expect that in 90% of cases, the true lift number falls within that range (if you were to have reached everyone).
Baseline positive response rate
How often users who were withheld from seeing your ads responded positively to your brand. Use this metric to better understand how positive responses to your brand were influenced by general media exposure and other factors, not by seeing the ads.
Exposed survey responses
The number of survey responses from people who saw your ads.
Baseline survey responses
The number of survey responses from people who were withheld from seeing your ads.
Exposed positive response rate
How often users who saw your ads responded positively to your brand.
Positive response rate
Out of all the people who responded to the survey, this is the percentage who responded with a positive answer regarding your product or brand.
Confidence interval
This is the estimated range in which your relative brand lift and absolute lift estimates fall. For example, you may see your relative lift is 38.41%, the point estimate. In brackets you will see the confidence interval from at least 30.5% to at most 45.0%.
Status
Here are the meanings of statuses within your study and Brand Lift reporting.
“X% Lift”
An X% lift indicates that we've detected high enough lift based on the number of responses we received to generate a report. For example, a 5% increase in the Absolute brand lift column indicates that your ads influenced your audience's positive feelings towards your brand or product by +5%. Learn more about the different Brand Lift metrics.
“Not enough data”
“Not enough data” means that based on the date range you’ve selected in your account, the number of Brand Lift survey responses received in that date range is below the minimum threshold required to surface results.
Fix “not enough data”
There could be multiple reasons for not getting enough data for your study or an individual slice. To fix it, make sure that:
- You spend your budget in full.
- The actual spend in your campaigns meets the minimums, not just your budget settings.
If your campaigns are spending enough but are still not getting Brand Lift results, check for the following:
Is your CPV bid too low?
Low traffic can indicate that you're getting outbid. Raise your bid to win more impressions and generate traffic. However, keep in mind that if you raise your bid, you’ll spend your budget faster (assuming those impressions lead to views). When you use your budget faster, you’ll reach fewer unique viewers and have less potential for viewers to fill out a survey.
Recommendation: If your traffic is low despite broad targeting, consider raising your bid. If raising your bid means you are hitting your budget cap, consider raising your budget to accommodate the higher bid.
Is your campaign configuration negatively affecting the survey control group?
If a Brand Lift study uses campaigns that target audiences who viewed the ad video before, it currently can't build a control group.
For example, let’s say you create a study with Video A. Next, you create a second study in which you target a YouTube list of “Viewers who watched Video A as an ad”. With this setup, you won’t be able to build a control group. You may have progress, but it will only be on the exposed side, so you can't expect results to post.
Another example is a Brand Lift study that uses campaigns targeting audiences who saw an earlier ad of a video ad sequence (VAS) campaign. With VAS campaign subtypes, you can create sequences of ads that you want users to view in a certain order (for example, ‘Show users Ad A, then Ad B, then Ad C’). Let’s say you create a campaign to add to your Brand Lift study that targets the audience list ‘Viewers who watched Video B as an ad’ and then use Ad C as your creative. Because every user who saw Ad C must have seen Ad B first, your ‘control group’ will be composed primarily of users who have already seen your ad within the VAS campaign.
Such configurations mean the study can’t build a control group, because the targeted users who will view your ad have already seen it. Since the only viewers eligible to enter the study have already been exposed, your control group won’t progress, and you shouldn’t expect results to post.
Is the campaign targeting too narrow?
The following study and campaign setup configurations may sometimes reduce the number of survey responses that your study will be able to gather. The extent to which they slow down survey response collection varies depending on the degree to which they’re narrowing your targeting reach.
Audiences (particularly retargeting), placements, keywords, and topics
More restrictive targeting types, such as placements, keywords, and retargeting, reduce the number of eligible viewers and can lead to fewer impressions. Fewer impressions and fewer viewers in turn mean less potential for viewers to fill out a survey.
Small geography
Too small of a geography might limit unique viewers, which reduces your odds of getting enough responses. Ideally, studies are run at the country level, but you can also target smaller geographies, as long as there’s a large enough population of viewers.
Recommendation: Monitor your traffic closely as the study is progressing. If you aren’t spending in full, broaden any overly restrictive targeting by expanding geography or removing overly restrictive targeting types like placements or keywords.
Are you issuing surveys that might have a low response rate by showing non-English surveys in all languages?
Your survey can only serve in one language. If you target multiple languages or “All languages”, you’re serving your survey to viewers who don’t speak that language, and these viewers are likely to dismiss it. Thus, targeting multiple languages or “All languages” isn’t recommended, as this could lead to a negative experience for many viewers. If your survey is in English, depending on the country, you can target “All languages”, because English is a commonly spoken second language in many countries. Even in this case, however, it isn’t a recommended practice.
Recommendation: In your campaign targeting, have the language you target match the language of the survey. Avoid targeting multiple geographies that speak different languages unless you know there's a high number of bilingual users or if your survey is in English, which tends to be the most common second language of bilingual speakers.
Are there too many campaigns (or Video experiment arms) in the Lift Measurement Configuration (LMC)?
Too many campaigns (or Video experiment arms) in the LMC result in lower impressions per campaign or Video experiment arm. Using Video experiments with many experiment arms may result in “Not enough data” at the campaign level if your campaign traffic isn’t large enough for each experiment arm.
Recommendation: If campaign level data is important to you, be conscientious of the number of experiment arms/campaigns within an arm that you add.
Additionally, including many campaigns in the same study (especially with overlapping targeting) may result in “Not enough data” at the campaign level, because each campaign still needs enough responses per campaign or reporting slice (for example, device, demographic, or ad).
If campaign-level reporting is a priority, avoid adding lots of campaigns to your study. Instead, consider running multiple studies with one campaign per study, or use Video experiments to avoid cross-contamination across studies.
For reach-focused campaigns, are you issuing surveys that might have a low response rate by showing multiple surveys to the same viewer?
“No lift detected”
Sometimes a study that has ended with enough survey responses will still show “No lift detected”. This happens when there was no statistically significant difference between the survey responses from viewers who watched your video ad and those who didn’t. If you don't have lift at the study level, check if you have lift in specific segments (for example, age, gender, campaign or device). Consider focusing on those segments with positive lift.
As with any media channel, some metrics are more difficult to move than others. Some audiences are more difficult to reach than others. It’s normal for video campaigns to have no lift on certain metrics and audiences.
Below are a few things you can do to improve your campaign’s set up, creative or targeting to increase the chances of seeing lift.
Set up your study correctly
- Select your competitor answer choices carefully
- A mismatch between the competitor’s brand or product and yours might lead viewers to select the competitor more often. For example, if you’re a small beverage company and choose a globally recognized soda brand as a competitor in the Brand Lift survey answer choices, viewers might choose that brand more, resulting in no lift for your brand.
- Ensure you entered your brand or product as the “Preferred Answer”
- If you didn’t enter the advertised brand or product as the “Preferred answer”, the study ran with the wrong parameter. You can make edits and use Re-measurement to re-enable your study with the correct brand or product and competitors.
- If the creative is focused on a product, choose the right product category
- If your creative focused on a specific product, and you measured the impact on the brand, you’ll likely have “No lift detected”. Unfortunately, the study ran with too large a scope. You should wait for the next campaign to measure its effectiveness. You’ve learned that this creative is too product specific to move the overall brand.
Improve your creative
Quality of the creative plays a huge role in getting lift. Check if your ad is following the ABCDs of effective YouTube creative. Contact your account manager for detailed guidance on improving your creative.
For light-branded ads, if your brand or product name isn’t present, appears late in the ad, or is too subtle, the audience won’t attribute the creative back to the brand or product advertised. To correct this, consider adding branding, like an icon, watermark, or banner, earlier in the ad. You can also change the script to integrate the brand or product more clearly.
To lift lower funnel metrics, such as conversions, adding the branding early probably won’t be enough to cause a significant lift. The creative needs to be more persuasive. Consider moving the main argument to earlier in the creative, or include more arguments in the ad script.
Limit exposure to your creative outside the lift study
If a creative was seen by viewers before the Brand Lift study launched, the control group (the group that doesn’t view the ad) may have been contaminated, resulting in no lift. A contaminated control group responds similarly to your exposed users, which reduces measured lift. To minimize creative contamination:
- Avoid running YouTube Video campaigns together with non-YouTube channels, like TV and other ad platforms.
- Avoid multiple brand lift studies with the same or similar creative (unless using Video Experiments)
- Avoid leaving other video campaigns with a similar creative out of your Brand Lift study