
The AppsFlyer Performance Index – A Look Behind the Scenes

Shani Rosenfelder Oct 26, 2017

The AppsFlyer Performance Index – Edition V came roaring out a couple of days ago, and the feedback and buzz have been extraordinary. We’re really excited to see how the Index has truly become an industry standard and the talk of the town.

Because of its high impact, we wanted to pull back the curtain a bit to give you a peek at how the Index was born, and why Edition V was such a big step forward.

The Birth of the Index

When I joined AppsFlyer in early 2015, I immediately noticed a black hole. A void. The mobile app industry was showing signs of maturity as mobile had already taken over our lives. And yet, app marketers didn’t really have a clue — on an industry-wide level — which media sources were the top performers. With so many companies out there, how do marketers choose where to invest their hard-earned dollars? Increasingly fierce competition has led many to over-promise, with much talk of big data, optimization, machine learning algorithms, quality, and an ability to deliver ‘the right ad to the right user at the right time.’ Easier said than done.

As the global leader in mobile attribution and marketing analytics, AppsFlyer has the scale and unbiased market position to offer app marketers the comprehensive, accurate, and trusted mobile media report card they so desperately needed.

From day one it was clear to us that the rankings had to combine quality and quantity, as the market began shifting from a focus on installs to a focus on user engagement. And there had to be a strict entry threshold per region and per network, based primarily on client adoption and a relatively even distribution of apps running with each network. Having said that, we also understood that the threshold could not be too high, as we wanted to leave room for smaller networks that performed well. After all, diversification of spend across many networks is super important, and many of these networks did deliver great results for marketers.
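To make the entry criteria concrete, here is a minimal sketch of what such a filter could look like. Every threshold, column name, and the evenness check itself are hypothetical placeholders; the actual criteria are internal to AppsFlyer.

```python
import pandas as pd

MIN_APPS = 20            # hypothetical minimum apps per network per region
MAX_TOP_APP_SHARE = 0.5  # hypothetical cap on a single app's install share

def eligible_networks(installs: pd.DataFrame) -> pd.DataFrame:
    """installs: one row per (network, region, app_id) with an 'installs' count."""
    stats = installs.groupby(["network", "region"]).agg(
        n_apps=("app_id", "nunique"),
        total_installs=("installs", "sum"),
        top_app_installs=("installs", "max"),
    )
    # Require enough adopting apps and a reasonably even distribution,
    # i.e. no single app dominating a network's volume in the region.
    stats["top_app_share"] = stats["top_app_installs"] / stats["total_installs"]
    return stats[(stats["n_apps"] >= MIN_APPS)
                 & (stats["top_app_share"] <= MAX_TOP_APP_SHARE)]
```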

When the first Index was released, its impact confirmed that initial feeling: there was indeed a vacuum to fill, and the market was hungry for this resource.

Edition V

Fast forward to August 2017, when we held our first discussions about Edition V. Very quickly, it was clear that the index was ready to be taken to the next level. And we knew that meant focusing heavily on fraud and ROI.

Fraud

The previous H2 2016 edition of the Index had already factored in fraud, and back then we thought we had it mostly covered. But the market moves quickly, and it was only in early 2017 that we realized how rapidly the scope of fraud was evolving — particularly a new type we identified as DeviceID Reset Fraud.

Shockingly, we found that DeviceID Reset Fraud was responsible for over half of mobile app install fraud, costing advertisers $1.1-$1.3 billion annually! Not to mention the alarming growth of fraud types that seek to game attribution companies — particularly via click flooding, install-hijacking and click-hijacking — which, when grouped together, can be referred to as poaching fraud.

So fraud had to be an integral part of the rankings. It inflates volume, as both DeviceID Reset Fraud and poaching fraud pump up install numbers: the former invents fake users from real devices, while the latter steals real installs (organic and non-organic). Fraud also has a significant impact on quality scores, making certain networks appear to perform better thanks to stolen, high-quality organic users.

First, we wanted to clean up the data. That meant removing installs where needed and lowering the quality score based on the relevant fraud rates per network (essentially, this is based on crunching aggregated data from numerous fraud signals at the site ID/publisher level).
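As an illustration only, here is a minimal sketch of that cleaning step. The field names, the day-7 retention proxy, and the discount formula are all assumptions; the actual pipeline crunches aggregated fraud signals at the site ID/publisher level.

```python
from collections import defaultdict

def clean_and_score(installs, fraud_rate):
    """installs: dicts with 'network', 'is_fraud', and 'retained_d7' fields.
    fraud_rate: per-network fraud share used to discount the quality score."""
    clean = [i for i in installs if not i["is_fraud"]]  # drop flagged installs

    retained, total = defaultdict(int), defaultdict(int)
    for i in clean:
        total[i["network"]] += 1
        retained[i["network"]] += int(i["retained_d7"])

    # Quality here is a simple day-7 retention rate, discounted by the
    # network's fraud rate so stolen organics cannot inflate the score.
    return clean, {
        n: (retained[n] / total[n]) * (1 - fraud_rate.get(n, 0.0))
        for n in total
    }
```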

But obviously it could not end there. Ranking networks based on the performance delivered by their clean inventory alone would be completely misleading, so we factored a penalty into the clean data ranking – each network according to its relevant fraud rate.

Initially, we used global fraud rates per network, but then found that fraud varied significantly across networks in different parts of the world. So we did it again, this time applying regional fraud rates in the relevant indexes.

We also had extensive discussions on whether we should exclude networks whose fraud rate passed a certain threshold in a particular region, given that fraud was already heavily factored into the rankings. Eventually we decided to go through with it. The fact that we ended up looking at fraud at a regional level gave us more confidence to remove bad actors, as some were excluded from region A but made it into region B, and so on.
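Putting the last few steps together, a simplified version of the regional ranking could look like the sketch below. The penalty form and the exclusion cutoff are hypothetical placeholders, not our published methodology.

```python
FRAUD_EXCLUSION_CUTOFF = 0.30  # hypothetical regional fraud-rate threshold

def rank_region(clean_scores, regional_fraud_rate):
    """clean_scores: network -> score computed on clean data in one region.
    regional_fraud_rate: network -> fraud rate measured in that same region."""
    ranked = {}
    for network, score in clean_scores.items():
        rate = regional_fraud_rate.get(network, 0.0)
        if rate > FRAUD_EXCLUSION_CUTOFF:
            continue                          # exclude bad actors regionally
        ranked[network] = score * (1 - rate)  # regional fraud penalty
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)
```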

We hope that the Index will further help our fight to clean up the ecosystem by raising awareness among advertisers and networks alike.

ROI

When we talk about KPIs, ROI stands above all. It’s about the bottom line: money. In the past year, AppsFlyer has significantly increased the number of networks that provide us with their cost data – now nearly 100.

In parallel, as our scale continued to grow, more and more advertisers implemented purchase event measurement. With sufficient scale of cost and revenue data, we set out to create our first ROI Index. But since an accurate ROI calculation has to include only apps that measure both cost and revenue, we used 800+ apps (vs. the 5,500 used in the Performance Index). And because we never compromise on scale thresholds when ranking networks, we settled for a global ROI ranking for iOS and Android.

Since ROI is obviously a quality-driven KPI, fraud once again had to be factored in: first by reducing the revenue figure accordingly, and then by applying a penalty to the clean ROI figure.
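In rough Python, the two adjustments could look as follows. The function and its inputs are illustrative assumptions, not the exact formula behind the ROI Index.

```python
def fraud_adjusted_roi(revenue, fraud_revenue, cost, fraud_rate):
    """All inputs are per network: gross revenue, the revenue attributed
    to fraudulent installs, media cost (nonzero), and the fraud rate."""
    clean_revenue = revenue - fraud_revenue    # strip fraud-driven revenue
    clean_roi = (clean_revenue - cost) / cost  # standard ROI on clean data
    return clean_roi * (1 - fraud_rate)        # penalty on the clean figure
```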

Enhancing Quality Score with Average Sessions Per User

With app engagement garnering so much attention (and rightfully so), we sought to beef up our quality score. Although the retention score in previous indexes had been based on a significant amount of data, we wanted to add another quality-driven KPI – average sessions per user.

Of course retention is super important, but it offers only one type of quality assessment: how long users keep using an app. Average sessions per user looks at how often an app is used, which is obviously another vital part of an app’s success.
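One plausible way to blend the two signals into a single quality score is sketched below. The normalization and the equal weighting are assumptions for illustration, not AppsFlyer’s published formula.

```python
def quality_score(retention_rate, sessions_per_user, max_sessions,
                  w_retention=0.5, w_sessions=0.5):
    """retention_rate in [0, 1]; sessions_per_user is normalized against
    the highest value observed across networks (max_sessions > 0)."""
    sessions_norm = min(sessions_per_user / max_sessions, 1.0)
    return w_retention * retention_rate + w_sessions * sessions_norm
```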

To Sum Up

We’re happy to see just how valuable The AppsFlyer Performance Index is for app marketers. And we constantly seek to improve it as the market rapidly changes, so that advertisers have a proven, heavily data-driven report card on the performance of their mobile media sources.

I can’t wait to see what we can bring to the table for the next index. See you at MWC!