Methods for Assessing Mobile Marketing Profitability with Excel
If you think you’ve entered the realm of advanced Excel by using pivot tables, calculated fields, conditional formatting, and VLOOKUPs, you may be surprised to learn that you’re missing an even more powerful trick in the Excel playbook. Even better, this trick can be used to predict your mobile marketing campaigns’ profitability!
By using a scatter plot and a bit of algebra, you can turn an Excel trendline equation into a powerful tool for identifying, early on, the point at which your marketing campaigns are likely to turn a profit. This method can help you graduate from hunches to data-driven decision making and raise your confidence in weekly reporting.
Finding the Week 0 ROAS That Predicts 100% ROAS at 6 Months
While a well-built LTV model is a great predictor, ROAS (particularly in the first week of a user’s lifetime) is a widely used profitability metric because it is so broadly accessible, so we will use ROAS as our predictor here.
In particular, we will use Week 0 ROAS (revenue in the first week after acquiring a cohort of users, divided by the cost to acquire those users), which is a cohorted, apples-to-apples way to benchmark ad performance each week. We’ll use Week 0 ROAS to predict whether we break even on our ad spend, i.e., reach 100% ROAS, after 6 months.
The first step to using Excel for predicting profit is to ensure you have enough Week 0 and 6-month data points. While technically you can draw a line through just two data points and read a prediction off any point on that line, a prediction powered by so few observations won’t be very good.
The ideal number of observations depends on a multitude of factors, such as your desired confidence level, the correlations in the dataset, and time constraints, but as a rule of thumb for Week 0 ROAS-based predictions, you should shoot for at least 60 pairs of Week 0 and 6-month ROAS observations.
Moreover, it’s vital to include enough observations that have actually reached the goal level you set. If you have 60 data points to plot but only 2 where 6-month ROAS crossed 100%, then your model won’t have enough information about what it takes to reach this breakeven point. For all your model knows, reaching 100% ROAS at 6 months could require another 2 full percentage points of Week 0 ROAS or another 5, which is far too wide a range for reliable prediction.
Once you have gathered enough observations at the goal level, the second step is to split your data set into two groups: one for training and one for prediction. Place the lion’s share of the data (~80%) in the training group; later on, you will use the prediction group to test how accurately your model predicts 6-month ROAS from Week 0 ROAS.
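As a rough sketch of the 80/20 split (in Python, with made-up ROAS pairs, since any spreadsheet’s actual numbers will differ), a random split could look like this:

```python
import random

# Hypothetical paired observations: (Week 0 ROAS, 6-month ROAS) as decimals.
pairs = [(0.08 + 0.001 * i, 0.70 + 0.01 * i) for i in range(60)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(pairs)

split = int(len(pairs) * 0.8)  # ~80% of observations go to training
train, holdout = pairs[:split], pairs[split:]

print(len(train), len(holdout))  # 48 12
```

Shuffling before splitting matters: if your rows are ordered by date, a straight top-80% cut would train the model only on older cohorts.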
The third step is to use a scatter plot to graph the data, with the Week 0 ROAS on the x-axis and 6-month ROAS on the y-axis. Add a trendline and add the equation and R-squared settings.
- Graph the training data using a scatter plot.
- Right-click a data point and add a trendline.
- Add the trendline equation and R-squared.
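Excel’s linear trendline is an ordinary least-squares fit, so the equation and R-squared it reports can be reproduced in code. A minimal sketch, assuming NumPy and illustrative made-up data:

```python
import numpy as np

# Illustrative (made-up) training data as decimals:
# Week 0 ROAS on x, 6-month ROAS on y.
week0 = np.array([0.08, 0.10, 0.11, 0.12, 0.13, 0.15])
month6 = np.array([0.65, 0.83, 0.92, 1.02, 1.10, 1.29])

# Excel's linear trendline is an ordinary least-squares fit: y = m*x + b.
m, b = np.polyfit(week0, month6, 1)

# R-squared: 1 - (residual sum of squares / total sum of squares).
pred = m * week0 + b
ss_res = np.sum((month6 - pred) ** 2)
ss_tot = np.sum((month6 - month6.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"y = {m:.4f}x + {b:.4f}, R^2 = {r2:.3f}")
```

The slope and intercept printed here play the same role as the `m` and `b` in Excel’s trendline label.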
Step four involves using the y = mx + b linear equation to solve for the equation’s x value (Week 0 ROAS) when the y value (6-month ROAS) is 100%.
Rearranging the equation using algebra is done as follows:
- y = 9.2695x - 0.0936
- 1 = 9.2695x - 0.0936
- 1 + 0.0936 = 9.2695x
- 1.0936 = 9.2695x
- x = 1.0936 / 9.2695
- x ≈ 11.8%
Thus, we calculate that to be on track for 100% ROAS at 6 months, your ROAS in the first week must be greater than 11.8%.
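The same rearrangement takes one line of code. A quick sketch using the example’s coefficients:

```python
# Trendline coefficients from the example above: y = m*x + b.
m, b = 9.2695, -0.0936

# Solve 1.0 = m*x + b for x: the Week 0 ROAS that predicts 100% ROAS.
breakeven_week0 = (1.0 - b) / m

print(f"{breakeven_week0:.1%}")  # 11.8%
```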
If your Week 0 ROAS comes in under this number, you know that you will need to adjust bids, creatives, targeting, or something else to improve the cost or quality of users acquired, or else improve your monetization trends. If your Week 0 ROAS is over this number, then you can feel confident in raising budgets and bids!
Step five is where you use the prediction segment of the full data set to assess how well your model predicts actual outcomes. This can be assessed using the mean absolute percentage error (MAPE), which divides the absolute value of the error (the actual value minus the predicted value) by the actual value, then averages those ratios across all observations. The lower the MAPE, the better the predictive power of your model.
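A MAPE calculation is only a few lines in code. A sketch with hypothetical holdout values (the actual and predicted numbers here are made up):

```python
def mape(actual, predicted):
    """Mean absolute percentage error across paired observations."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

# Hypothetical holdout data: actual 6-month ROAS vs. the model's predictions.
actual = [0.95, 1.05, 0.80, 1.20]
predicted = [1.00, 1.00, 0.85, 1.10]

print(f"{mape(actual, predicted):.1%}")  # 6.2%
```

Note that MAPE breaks down when an actual value is zero, so filter out cohorts with no revenue before scoring.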
There is no rule of thumb for a good MAPE number, but generally, the more data your model has and the more correlated that data is, the lower the MAPE and the better your model’s predictive power will be. If your MAPE is large and the error rates are unacceptable, it may be necessary to use a more complex model. While more difficult to manage, more complex models (especially those built in R or Python) can increase the predictive power of your analysis.
And there you have it: a framework for predicting marketing campaign profitability.
But don’t stop reading yet! This guide has more goodness to come.
Improve Your Predictions
For the curious readers out there, the question on your minds may be whether the default, linear trendline is the best to use for predicting profit. You may even try out a few more trendlines and discover that the R-squared (a measure of the fit of the equation to your data) improves with other equations, raising the profile of this question even more.
While the adage of “it depends” applies again when selecting the best trendline, another adage is useful as a response: KISS (keep it simple, stupid). If you are not a statistician or a math enthusiast, your best bet is to stick with the simpler (linear) trendlines.
Why is this an issue? As a simple example, consider the addition of unexpected data into the model. In the following two scenarios, see how a lower Week 0 ROAS maturing unexpectedly well, or a higher Week 0 ROAS maturing unexpectedly poorly affects each trendline model’s accuracy (assessed using the MAPE).
Using the MAPE to compare the different trendline-based models here shows that, while the linear and exponential models are never the single most accurate in any one case, they are the most consistent.
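That comparison can be replicated outside Excel by fitting trendlines of different degrees and scoring each with the MAPE on held-out cohorts. A sketch, assuming NumPy and synthetic numbers (including one surprise cohort):

```python
import numpy as np

def mape(actual, predicted):
    return float(np.mean(np.abs(actual - predicted) / actual))

# Synthetic, roughly linear training data plus one surprise: a low
# Week 0 ROAS cohort that matured unexpectedly well.
x_train = np.array([0.08, 0.10, 0.11, 0.12, 0.13, 0.15, 0.07])
y_train = np.array([0.65, 0.83, 0.92, 1.02, 1.10, 1.29, 0.90])

# Held-out cohorts used to score each trendline.
x_test = np.array([0.09, 0.14])
y_test = np.array([0.74, 1.20])

results = {}
for degree, label in [(1, "linear"), (3, "cubic")]:
    coeffs = np.polyfit(x_train, y_train, degree)
    results[label] = mape(y_test, np.polyval(coeffs, x_test))
    print(f"{label}: MAPE = {results[label]:.1%}")
```

The higher-degree fit bends to accommodate the surprise point, which is exactly the kind of instability the simpler trendline avoids.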
As a final note, check out this list of other questions that can prove useful for ensuring your prediction analysis is formed on solid ground:
- Did you continue feeding your model to keep it trained on the most relevant data?
- Did you check whether your model’s predictions come to fruition (or close to it) as new observations arrive?
- Do you have too much variance or, conversely, overfitting?
- Either a very low R-squared or a suspiciously high R-squared can indicate a problem with your model’s ability to predict new data accurately.
- Did you use the right KPI?
- Go ahead and test different KPIs (e.g. more or fewer days of ROAS or LTV) and use the MAPE to compare the profit prediction power of each! You may be surprised at how poorly correlated the standard measures prove to be.
- Did your leading indicators or early benchmarks experience significant change?
- This can be a sign that something significant has shifted in the real world, and that trouble is brewing for your model’s ability to predict profit accurately moving forward.
- Did you apply segmentation to the data?
- Segmenting users into more homogenous groups is one great way to reduce noise and improve the predictive power of your model. For example, don’t apply the same model to all users across all channels and geographies if those users have significantly different retention and cost trends.
- Are you considering the influences of time?
- Most marketers are aware that seasonality is a reason predictions can break down, but the lifecycle of your app/campaign/audience/creative can also influence your model’s ability to make accurate predictions.
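The segmentation point above can be sketched in code; the channel names and cohort numbers here are hypothetical:

```python
from collections import defaultdict

# Hypothetical cohort records: (channel, week0_roas, month6_roas).
cohorts = [
    ("search", 0.12, 1.05),
    ("search", 0.14, 1.22),
    ("social", 0.09, 0.70),
    ("social", 0.11, 0.85),
]

# Group cohorts by channel so each segment can get its own trendline,
# rather than forcing one model onto users with different trends.
by_channel = defaultdict(list)
for channel, week0, month6 in cohorts:
    by_channel[channel].append((week0, month6))

for channel, points in by_channel.items():
    avg_week0 = sum(w for w, _ in points) / len(points)
    print(f"{channel}: {len(points)} cohorts, avg Week 0 ROAS = {avg_week0:.1%}")
```

With the data grouped this way, the trendline-fitting step can be repeated per segment instead of once over the whole data set.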