How do you know, for real, the money you are spending on marketing is money well spent? How do you know if your ads are actually impacting consumer behavior?
The truth is that only one type of measurement can answer these questions with real clarity.
It’s called incrementality testing.
Incrementality testing measures the true, and often hidden, ROI of your advertising spend. Why hidden?
Well, the lines that separate organic traffic and paid conversions are often blurry. It is entirely possible that you are paying to acquire new users that would have converted anyway. Incrementality testing is the best way to uncover this hidden relationship and ultimately know the true value of marketing.
Measuring incrementality is more than simply suspending your paid media activity for a week and analyzing the effect. Running these kinds of tests is a tricky business so the point of this article is to show you how to calculate incrementality and how to interpret its results.
We will also take a look at how it differs from last-click attribution (through post-install optimization), and leave you with an overall understanding of this increasingly important form of measurement.
How is incremental impact determined?
Incrementality tests consist of two groups – test and control. The test group is exposed to ads, while the control group is withheld from them and serves as a baseline for comparison.
By measuring the results from each group you know which conversions — whether related to user acquisition or remarketing — would not have happened without advertising. This is known as incremental lift.
For example, in the simplest terms, Marco’s Pizzeria launches a new thick crust pizza and wants to determine how successful their advertising campaign is. After a month of handing out coupons to passersby they measure how many of the new pizzas were purchased using the discount coupon and how many were purchased without it.
The difference in sales between the two groups of clients is the incremental lift.
3 types of incremental effect
Incrementality experiments can deliver a range of results as follows:
In the first example we can see that the experiment led to a positive incremental lift. That means your paid campaign was effective as it generated an increase in revenue.
In the second example there is no incremental lift – the result is neutral. While the campaign is generating sales, it adds no incremental value, and the marketing team should consider pausing the campaign or trying a different approach (changing the creative, updating targeting, etc.).
In the final example we see a negative incremental lift. Although it’s rare, it is possible for an advertising campaign to do more damage than good (for example, over-exposure in a remarketing campaign that leads to negative brand impact). It’s also worth looking into the test itself and making sure that it’s configured correctly.
How does incrementality testing actually work?
First, let’s explore some key terms and metrics.
Below are some definitions of the terminology which surrounds incrementality and will help us understand the process even better.
| Term | Definition |
| --- | --- |
| Key performance indicator (KPI) | A measurable value that demonstrates how effectively a company or app is achieving its key business objectives |
| Control group | A segment of users within a given campaign audience who are not exposed to the ads served to the test group |
| Test group | A segment of users who are shown the ads in a given campaign |
| Statistical significance | A measure of the likelihood that the difference in results between the control and test groups is not a coincidence |
| Incremental lift | The percentage difference in results between the test group and the control group |
Moving on to the process. An incrementality test is similar to other scientific experiments. You have your hypothesis, method, collection and analysis of results, and your conclusion. With incrementality testing there are five distinct stages: define, segment, launch, analyze, and take action. Let’s dive a little deeper.
Defining your goals
When starting an incrementality experiment, it is important to define your hypothesis and identify any vital business KPIs that you want to examine further. Think about what you are looking to prove using this scientific method.
For example, are you examining the number of installs, ROI, ROAS, or a different metric altogether?
Segmenting your audience
When running an incrementality test on a remarketing campaign, select the audience you want to run this experiment with and make sure that you properly segment a section of this audience as a control group.
Pro tip: Your attribution platform will most likely be able to help you segment your audience as you wish and build your campaigns accordingly.
The groups – control and test – should have similar characteristics but must not overlap.
This can be tricky when focusing on UA (user acquisition) campaigns, because without a unique identifier – an ID or code that distinguishes one user from all others – we don’t know who is in the audience.
However, there are other parameters you can use to segment your audience, such as geo, time, product, or demographics.
Launch the experiment
Decide the duration of your test and the testing window and launch it.
Best practice states that the duration of your experiment should last at least a week.
The testing window – the days of user activity preceding the test – depends on your app’s business cycle and the volume of data you have to work with.
The test and testing window should be planned for a period when the calendar is clear of major events, as this will give the most accurate representation of your campaign’s effectiveness.
Analyze the data
Once you’ve collected all the data from your control and test groups, aggregate and compare them to identify the incremental lift in a specific KPI, according to your goals.
Understanding the relationship between the results from the control and test groups will help explain why the incremental lift was positive, negative, or neutral.
If you notice an unusually wide gap between your control and test groups, it may indicate that something is wrong with the experiment’s configuration, and you might choose to retest.
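The statistical significance mentioned in the definitions above can be checked with a standard two-proportion z-test. This is a minimal Python sketch under stated assumptions: the function name and the conversion counts and group sizes are hypothetical, and 1.96 corresponds to roughly 95% confidence.

```python
import math

def lift_significance(test_conv, test_size, ctrl_conv, ctrl_size, z_crit=1.96):
    """Two-proportion z-test: is the test group's conversion rate
    significantly different from the control group's?"""
    p_test = test_conv / test_size
    p_ctrl = ctrl_conv / ctrl_size
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (test_conv + ctrl_conv) / (test_size + ctrl_size)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_size + 1 / ctrl_size))
    z = (p_test - p_ctrl) / se
    lift = (p_test - p_ctrl) / p_ctrl
    return lift, z, abs(z) > z_crit

# Hypothetical figures: 1.2% vs. 1.0% conversion over 100k users each
lift, z, significant = lift_significance(1200, 100_000, 1000, 100_000)
print(f"lift={lift:.0%}, z={z:.2f}, significant={significant}")
```

With these figures the 20% lift comes out significant; with much smaller groups the same lift could easily fail the test, which is why group size matters (see the challenges section below).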
While incrementality testing can be quite challenging to set up on your own, some attribution providers offer integrated incrementality testing tools. These let you ingest all of the test data directly from your attribution platform into an incrementality dashboard – a significant advantage that makes the process more streamlined and efficient.
Based on the information gained, apply the insights to your campaigns to maximize impact. This could be the best messaging for each target audience, the optimal time for re-engagement, or the most effective media source, to name a few.
Two key methods to measure incrementality
Once you have accumulated and aggregated the data, how do you then go about calculating the incremental lift?
There are two main methods:
1. Incremental profit
Measure lift by pinpointing the incremental profit – the real value – of a given media channel. This is calculated by subtracting the control group’s profit from the channel’s profit.
For example, let’s say you spent $2,000 on a campaign. Media Channel A generated $5,000 in profit and Media Channel B generated $3,000. On the surface, these both look like profitable channels. However, your organic campaign also generated $3,000 so the incremental profit on Media Channel B was zero.
| Channel | Spend | Profit | Incremental profit |
| --- | --- | --- | --- |
| Media Channel A | $2,000 | $5,000 | $2,000 |
| Media Channel B | $2,000 | $3,000 | $0 |
By understanding how much profit you gain through your control group, you see that anything you generate below that value gives you no incremental profit.
Basically, you’d be making the same profits without advertising to them, so save your budget and invest in a channel, activity, media source, campaign, etc. that can deliver more impact.
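The arithmetic above can be sketched in a few lines of Python, using the channel figures from the example:

```python
def incremental_profit(channel_profit, control_profit):
    """Incremental profit = channel profit minus what the control
    (organic) baseline would have generated anyway."""
    return channel_profit - control_profit

# Figures from the example: $3,000 organic (control) baseline
control = 3_000
channels = {"Media Channel A": 5_000, "Media Channel B": 3_000}
for name, profit in channels.items():
    print(f"{name}: incremental profit = ${incremental_profit(profit, control):,}")
# → Media Channel A: incremental profit = $2,000
# → Media Channel B: incremental profit = $0
```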
2. Incremental lift
Use the following formula to calculate incremental lift:

Incremental lift = (test group conversions − control group conversions) / control group conversions
To show how this works, let’s attach numerical values. Say your test group generated 10,000 conversions and your control group generated 8,000. Then 10,000 minus 8,000, divided by 8,000, is 0.25.
A 25% incremental lift can be judged as good or bad against your KPIs and ROAS.
For example, if your CPA is $2, divide that by the 0.25 lift to get an incremental CPA of $8. If your LTV is higher than $8 you are doing well; if it is lower, you may need to reassess your campaign strategy.
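The lift and break-even arithmetic above can be sketched as follows (Python, using the figures from the example; the function names are illustrative):

```python
def incremental_lift(test_conversions, control_conversions):
    """(test − control) / control"""
    return (test_conversions - control_conversions) / control_conversions

def incremental_cpa(cpa, lift):
    """Effective cost per truly incremental conversion."""
    return cpa / lift

lift = incremental_lift(10_000, 8_000)   # 0.25, i.e. a 25% lift
cost = incremental_cpa(2.00, lift)       # $2 / 0.25 = $8
print(f"lift={lift:.0%}, incremental CPA=${cost:.2f}")
# The campaign pays off if LTV exceeds the incremental CPA
```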
Incrementality vs. A/B testing
Now that you’ve got the gist of incrementality testing, you might be wondering, is it actually that different from A/B testing?
The first thing to stress is that incrementality is essentially a type of A/B test. Standard A/B testing divides your product or campaign into two, A and B, and then divides your audience into Audience 1 and Audience 2. Then you apply different versions of the product or campaign to the different audiences and see which delivers the better results.
For example, one audience sees a banner with a blue button, and the other sees the same banner but with a red button. Comparing the banner’s CTR for each audience is a standard A/B test in marketing.
Where incrementality differs from standard A/B testing is the control group: one portion of the audience is not served any ads at all during the test.
If we go back to the example above, incrementality tells you whether running a given ad is better than not running it at all, rather than whether the blue-button version outperforms the red-button version.
How do you not serve ads to an audience, yet still “own” the ad real estate?
There are three methodologies:
- Intent-to-treat (ITT) – this method calculates the experiment results based on the initial treatment assignment and not on the treatment that was eventually received (meaning you mark each user for test/control in advance and do not rely on attribution data. You have the “intent” to treat them with ads / prevent them from seeing ads, but there’s no guarantee it will happen).
- Ghost ads/bids – this is another example of a randomly split audience, but this time it is done just before the ad is served. The ad is then withheld from the control group, simulating the process of showing the ad to the user, known as ad serving, without paying for placebo ads. This is a tactic mostly used by advertising networks carrying out their own incrementality tests.
- Public service announcements (PSAs) – ads are shown to both the test and control groups, but the control group is shown a generic PSA while the test group is shown the real ad. The behaviors of users in both groups are then compared to calculate incremental lift.
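As an illustration of the intent-to-treat approach above, here is a minimal Python sketch of deterministic, up-front group assignment. The salt, user IDs, and 10% control share are all assumptions for illustration; hashing (rather than a random draw at serve time) keeps each user’s assignment stable across sessions and servers.

```python
import hashlib

def assign_group(user_id: str, control_share: float = 0.10,
                 salt: str = "exp-01") -> str:
    """Mark each user test/control in advance (intent-to-treat):
    the assignment exists regardless of whether an ad is ever served."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < control_share else "test"

print(assign_group("user-123"))  # same answer every time for this user
```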
Incrementality vs. ROAS optimization
Incrementality testing is not a replacement for traditional attribution models. It works in unison with attribution to help you better measure your performance.
Note: measuring installs alone is insufficient when it comes to understanding your ROAS.
Marketers need to be measuring and optimizing based on a range of post-install metrics, and the further down the funnel you go the better. By focusing on LTV and, most importantly, factoring in your media costs, you should be able to see whether your ROAS is positive.
Incrementality works within this framework and tells you if you could have an even better ROAS by spending less on advertising and still gaining the same revenues from organic users.
The incremental impact on ROAS (aka iROAS) is calculated by taking the difference between your test group revenue and control group revenue and dividing that by the total ad spend. By removing organic conversions from the equation you are able to calculate the true impact of a campaign and optimize accordingly.
For example, if your iROAS is less than 100% you can redistribute budgets to better-performing campaigns and channels. If it is equal to or higher than 100% you know you are not cannibalizing organic traffic and that your ads are effective.
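The iROAS calculation described above can be sketched as follows (Python; the revenue and spend figures are hypothetical):

```python
def iroas(test_revenue, control_revenue, ad_spend):
    """iROAS = (test revenue − control revenue) / total ad spend.
    Subtracting the control baseline removes organic conversions,
    leaving only the revenue the ads actually caused."""
    return (test_revenue - control_revenue) / ad_spend

# Hypothetical: $12k test revenue, $9k control baseline, $2.5k spend
value = iroas(12_000, 9_000, 2_500)
print(f"iROAS = {value:.0%}")  # → iROAS = 120%
# Above 100%: ads add value beyond organic; below 100%: reallocate budget
```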
With the added layer of incrementality, marketers have a vital extra dimension of information for optimizing ROAS to its full potential. Herein lies the difference between simply measuring your ROI / ROAS and seeing the true incremental impact of your campaigns on ad spend.
The benefits of incrementality testing
Marketers who use incrementality testing can demonstrate, with confidence, exactly how effective their campaigns are. Not only have you identified the impact on your iROAS, but you can also apply these insights to future marketing strategies.
For example, incrementality testing will be very useful when testing a new media channel before deciding whether to invest more heavily. You can also use incrementality testing on small media campaigns to see if there was a positive ROAS. If the answer is yes, then you can confidently scale marketing efforts in that channel.
Another example where incrementality testing comes in handy is when it comes time to create a re-engagement strategy. Incrementality testing helps highlight the optimal day, post-install, to re-engage users and to ensure the highest incremental lift from your marketing efforts.
Armed with this knowledge, you as a marketer will be able to make better-informed decisions about which channels are delivering the highest (real) impact and where to invest your marketing budgets.
The challenges of incrementality testing
Of course no method is without its challenges and incrementality is no different.
When creating your control and test groups, it is important to remove any noise or external factors that may impact user behavior. You also need to clean the data and make sure there are no overlapping audiences, as these can also skew the results.
Deciding on the parameters of your experiment is also challenging.
Each app has different volumes of users and therefore you need to decide the best segment size to test without damaging your existing marketing efforts.
Taking too small a segment will render your results statistically insignificant, so there is a tradeoff between achieving results you have confidence in and the cost of maintaining a lengthy test period.
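The segment-size tradeoff above can be estimated with a standard sample-size calculation for two proportions. This is a rough Python sketch under stated assumptions: the baseline conversion rate, the minimum lift you care to detect, and the roughly 95% confidence / 80% power defaults are all illustrative.

```python
import math

def min_sample_size(baseline_rate, min_lift, alpha_z=1.96, power_z=0.84):
    """Approximate per-group sample size needed to detect a relative
    lift of `min_lift` over `baseline_rate` (normal approximation)."""
    p0 = baseline_rate
    p1 = baseline_rate * (1 + min_lift)
    p_bar = (p0 + p1) / 2
    n = ((alpha_z + power_z) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p0) ** 2
    return math.ceil(n)

# e.g. a 2% baseline conversion rate, detecting a 10% relative lift
print(min_sample_size(0.02, 0.10))
```

With these example figures the answer runs to tens of thousands of users per group, which is why small apps often struggle to reach significance without long test periods.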
It’s not always possible to pause all of your marketing campaigns for a week or a month. In that case, if you want results without extending the test period, a common approach is to pause only the lowest-performing media source and measure there.
Identifying and excluding outliers is another important step as this can skew the data and lead to incorrect conclusions. The volume of data will affect how impactful the outliers will be on the results so, again, it is an important factor when considering the benchmarks for your experiment.
Be aware of seasonality.
Dates in the calendar like Black Friday, Cyber Monday, Easter, and the holiday season will all affect user behavior. Choosing the right time to start running your test is therefore crucial.
Comparing these results to quieter periods will bring up very different results. As an app marketer you can decide the best time frame to run an incrementality test based on your business model and typical user trends.
Lastly, incrementality testing poses some engineering challenges.
These experiments are complicated and require significant developer resources and expertise to build the technology needed to deliver reliable results.
For example, connecting to each ad network’s API, receiving and aggregating all of the raw data, removing outliers, and calculating the statistical significance of your results is a ton of hands-on work.
Working with an attribution provider that offers an incrementality tool will help save time and money. The data is all there in your attribution dashboard, so you can easily segment and aggregate this information into incrementality experiments.
Incrementality is a powerful tool that can give you real insight and confidence in your channel selection, budget allocation, and ROAS measurement, while ensuring your marketing efforts reach their full potential.
To achieve this you should remember to:
- Adopt a holistic approach, focusing on both paid and organic traffic, keeping in mind the complex relationship between them.
- Make sure your data is clean. Remove the noise, the outliers, and the overlapping audiences to ensure your experiment delivers statistically significant results.
- Define your KPIs before building your campaigns and make sure to properly segment your audience.
- Ingest, aggregate, and compare the data to identify the incremental lift of your campaigns.
- Optimize budget allocation and maximize ROAS with a better understanding of which channels are delivering the highest incremental lift, which cohorts are more receptive to advertising, and what the optimal time is to re-engage users.
LTV- or ROAS-driven optimization is vital for measuring the value of your campaigns, but it is only with the addition of incrementality that you can get the ultimate seal of approval on campaign effectiveness.
The final word: Measurement in the age of privacy
Apple’s upcoming enforcement of the ATT framework, as part of iOS 14’s privacy-driven approach, will largely eliminate the ability to measure via device matching.
But since Apple’s SKAdNetwork only captures about 68% of installs driven by non-organic activity, other measurement methods will become increasingly important to fill the gap and enable smart, data-driven decisions – methods including probabilistic attribution, web-to-app measurement, and, you guessed it, incrementality!