Facebook Creative Testing Methodology [MAMA Board]
Welcome to the nineteenth edition of MAMA Boards, an AppsFlyer video project featuring leading mobile marketing experts on camera. For today’s mini whiteboard master class, we have Kira Tovarovsky, Marketing Team Leader at Playtika, one of the leading gaming companies with multiple games in the top grossing charts in the US and worldwide.
You probably already know the value of testing through building your marketing campaigns, but are you taking advantage of Facebook’s full capabilities to drill down on the creative level? In this MAMA Board, Kira discusses the 4 easy steps to nailing Facebook creative testing from brainstorm to launch, and later gives 6 pro tips for optimizing these assets to perfection. Ready to leverage your creatives like a pro? Of course you are.
Real experts, real growth. That’s our motto.
Hi, everyone, and welcome to another edition of MAMA Boards by AppsFlyer. My name is Kira and I’m a User Acquisition Team Leader at Playtika, one of the mobile leaders in the gaming industry.
Why is setting a Facebook creative testing methodology important?
So we are here to discuss creative testing methodology on Facebook. We’ll start, however, by highlighting why testing is important: it gives us a clear understanding of what’s going to work and what won’t. Testing on Facebook is easy because we can take one audience and easily split it into two groups. It’s the same audience, but we show each group different concepts, different creatives, different everything, to get clearer performance insights.
The creative is extremely important for a media buyer because it’s their main optimization lever. It’s central to the optimization process, and using it well is the best way to find high-performing combinations of audiences and creatives.
Some of the reasons why we would use a particular creative methodology are, first of all, to get clarity on performance.
Second, we can test different concepts, different features, different backgrounds, and everything. For example, we can take a video and cut different backgrounds and then we can understand if one of the backgrounds is better performing or if, simply, the concept doesn’t work.
Third, we can get fast results because we are using one optimization goal: app installs. In the past, we were using multiple optimization goals for app event optimization, like FTDs, revenue, and purchases, and it took us time to get a clear understanding, plus we spent lots of money on it. Eventually, we decided that we wanted a tool to help us get clear results, and fast, without being dependent on the Facebook algorithm.
We all trust the Facebook algorithm to accurately evaluate creative performance and then promote the top performing one. Sometimes, however, you might see some ads that didn’t get any impressions, any spend, anything. It doesn’t mean that the creative wasn’t good. It just means that we were dependent on the Facebook algorithm. So, to cut a long story short, we are going to discuss something today that will help to work around the Facebook algorithm and make our lives easier.
How do we establish and implement the creative testing methodology?
How do we actually do it? First, we need to brainstorm for a concept. What does that mean? First, we sit with the creative team and think about different concepts, different backgrounds, something new, icons that we can add, features in the creative, something that’s going to be new and different.
Second, the creative team designs what we discussed and prepares several sizes or several concepts that we will be able to test.
Next, we are ready to prepare our Facebook test. What do we do? First, we create two separate ad sets – this is related to what we discussed before about why we shouldn’t be dependent on the Facebook algorithm. In the first ad set, we will put our top creative, chosen based on the benchmarks that define a top creative. In the second ad set, we’re going to put the new creative that we just created after brainstorming with the creative team.
Finally, for each one of the ad sets, we are going to allocate a $200 budget as a starting point and start preparing the test and gathering results.
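The setup above – one strong audience split across two ad sets, each running one creative with the same optimization goal and a $200 starting budget – can be sketched as plain data. This is an illustrative Python sketch, not the actual Facebook Marketing API; the `build_creative_test` helper and its field names are assumptions for clarity.

```python
# Illustrative sketch of the split-test setup; field names are assumptions,
# not the Facebook Marketing API schema.

TEST_BUDGET_USD = 200  # starting budget per ad set, as suggested above


def build_creative_test(audience_id: str, top_creative: str, new_creative: str) -> list[dict]:
    """Two ad sets over the *same* audience: control (top creative) vs. challenger."""
    common = {
        "audience_id": audience_id,
        "daily_budget_usd": TEST_BUDGET_USD,
        # One optimization goal (app installs) for fast, clear results,
        # instead of multiple app-event goals.
        "optimization_goal": "APP_INSTALLS",
    }
    return [
        {"name": "control", "creative": top_creative, **common},
        {"name": "challenger", "creative": new_creative, **common},
    ]


ad_sets = build_creative_test("aud_top_performers", "video_v1", "video_v2_new_bg")
```

Keeping everything identical except the creative is what makes the comparison clean: any performance gap can then be attributed to the creative rather than the audience or the budget.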
So how do we decide what’s a good creative? We need to define benchmarks to determine this.
What benchmarks can we set for creative testing?
First of all, we would like our top creative to have a good CTR, a good click-through rate, which reflects the rate at which people tend to click on your ad. For a top creative, we’d expect it to be higher than the average.
Second, we’d expect the CVR, or conversion rate, to be equal to the average. CVR reflects how well an audience converts against a given optimization goal. Since, in this case, we’ve decided to take our top performing audience and use it with only one optimization goal, app installs, we’d expect the CVR to be at the average.
Third, if we meet these two requirements, then we’d expect the CPI to be lower than or equal to the average. This follows because if people tend to click on the ad and download the app, and the audience is good, the CPI should be good as well.
Finally, we want some meat in the creative, so we are going to use only creatives that drive more than a thousand installs, because I don’t want to reach a conclusion based on 500 installs, or 100, or even two. Those kinds of numbers simply can’t tell us whether a creative is good or not.
On the other hand, not every creative is going to be our top performing one, so we would also like to have some benchmarks for average creatives. Usually, we take the last seven days of performance because we want the most accurate stats. If something changes in the algorithm or in the application itself, I want the latest results so I can understand whether a new creative is going to work in relation to these benchmarks. Therefore, to set the average benchmarks, we look only at the last seven days to get average CTR, CPI, and conversion rates.
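The seven-day average benchmarks can be sketched as a small helper. This is an illustrative Python sketch; the daily-stats field names (`clicks`, `impressions`, `installs`, `spend_usd`) are assumptions, not a real reporting schema.

```python
# Illustrative sketch: average CTR, CVR, and CPI over the last seven days
# of daily stats. Field names are assumptions for the example.

def seven_day_benchmarks(daily_stats: list[dict]) -> dict:
    """Aggregate the most recent seven days into average benchmarks."""
    recent = daily_stats[-7:]  # only the last seven days, per the methodology
    clicks = sum(d["clicks"] for d in recent)
    impressions = sum(d["impressions"] for d in recent)
    installs = sum(d["installs"] for d in recent)
    spend = sum(d["spend_usd"] for d in recent)
    return {
        "ctr": clicks / impressions,  # click-through rate
        "cvr": installs / clicks,     # install conversion rate
        "cpi": spend / installs,      # cost per install
    }
```

Summing first and then dividing gives spend-weighted averages, so a high-volume day counts more than a quiet one, which matches how ad platforms report account-level rates.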
How do we analyze our results?
After we’ve calculated our benchmarks and created the test, we’re ready to analyze the results. Note that this also depends on collecting our data across multiple time samples.
What’s the bottom line of our analysis? We are checking if the new creative is better than the top creative. When that’s the case, we are going to push it in all of our campaigns, which are being optimized for revenue, because we know already that the creative works.
If the new creative doesn’t beat the top one, we check whether it passes in comparison to the average benchmarks. If it stands at the average, the creative should be okay – it’s not the top one, but we’re still going to push it, because not every creative is going to be the best one.
However, if the creative doesn’t meet the average benchmarks over the seven day period, we’re not going to push it anymore, and we’re going to cut it from the campaigns because we don’t want to lose money. The bottom line is that we understand something is not working, and we therefore need to decide on a new concept.
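The push/keep/cut logic above can be sketched as a decision function. This is an illustrative Python sketch of one reasonable reading of the rules (CTR above the comparison point, CPI at or below it, plus the 1,000-install minimum from the benchmarks section); the exact comparisons, field names, and return labels are assumptions.

```python
# Illustrative sketch of the analysis decision tree described above.
# Thresholds and field names are assumptions for the example.

MIN_INSTALLS = 1000  # a few hundred installs can't define a good creative


def creative_decision(new: dict, top: dict, avg: dict) -> str:
    """Decide what to do with a tested creative.

    new: stats for the challenger creative (installs, ctr, cpi)
    top: stats for the current top creative
    avg: seven-day average benchmarks
    Returns one of: 'inconclusive', 'push_to_revenue_campaigns', 'keep', 'cut'.
    """
    if new["installs"] < MIN_INSTALLS:
        return "inconclusive"  # not enough volume to judge either way
    if new["ctr"] > top["ctr"] and new["cpi"] <= top["cpi"]:
        # Beats the top creative: push it to all revenue-optimized campaigns.
        return "push_to_revenue_campaigns"
    if new["ctr"] >= avg["ctr"] and new["cpi"] <= avg["cpi"]:
        return "keep"  # not the best, but still worth running
    return "cut"  # below average: stop spending and rethink the concept
```

For example, a challenger with 2,000 installs, a higher CTR, and a lower CPI than the control would be pushed; one that only matches the seven-day averages would be kept; one below the averages would be cut.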
So, since we’ve finished discussing collecting results and everything, we would like to cover a few tips. I have performed lots of different tests on creatives, as has my team, so we decided on a few things that can make your life easier while testing creatives.
First, use a strong audience. If you are not using a strong audience, the top or average creative might show you misleading results, because you won’t know whether the fault lies with the creative or with the audience. It’s better to use an audience that you already know is working, with thousands of installs and good performance in terms of FTDs, revenue, and more. This is the first tip.
The second tip is to take out all the engagement features that you have on Facebook. Usually, you upload the creative, you get a top creative, it gets tons of likes and shares and everything. When you’re using the same creative for the test, take out these engagement features; don’t use the same post because you want to have a clean test.
Usually, when you use creatives with a performance history, it’s not a clean test anymore, because the creative already carries old engagement metrics, and users watching the ad might think, “Okay, great. This one has tons of shares and likes, this one doesn’t.” Which one is going to work better? The one with the shares and the likes. So don’t forget to take these out, and don’t reuse the same posts.
The third tip is to use the right ad size. After testing lots of different sizes and doing tons of investigation, we saw that there is one size on Facebook that works across all placements. This is something you should test for yourself, of course, but based on our experience, you can find one size that works for everything.
Fourth, when using videos, you have two different types. First, you have cinemagraphs; from our tests, 15 seconds is a length that can work well for you. And if you are using full videos, 30 seconds works well. Either way, the video needs a clear flow, and the viewer needs to understand what’s happening in it.
The fifth and most important tip is to show gameplay in the first three seconds. After the first three seconds, your users are likely to start dropping off, but if your video is really good and strong, you can hold real engagement from your users for up to 30 seconds on average. However, don’t forget: it’s important in the first three seconds to help the user understand the ad’s message, your app, its vertical, etc.
So that’s it for today. Now, it’s your turn to be creative and think outside the box. For comments and questions, you can leave them in the box below. For more MAMA Boards, you can click this link. Thank you for watching! Goodbye.