AppsFlyer Industry benchmarks: Frequently asked questions

How is each metric defined?

The full list of 17 metrics and their definitions can be found here.

What is the source of data that's used in the platform?

All data on this site comes from a single source: AppsFlyer, the industry leader in marketing measurement with a global app market share that exceeds 60%. Data is collected via integrations with leading media partners, as well as through our proprietary SDK implemented in our clients' apps. To ensure statistical validity, we follow strict volume thresholds and methodologies. All data is fully anonymized and aggregated.

How often is the data updated and when?

We update the data on a quarterly basis, usually by the 7th of January/April/July/October. A two-month cohort is applied for post-install metrics such as retention rate and share of paying users, which cover data up to 30 days after the install.

What's the difference between 'average per app' in the Performance Benchmarks section and the methodology used in other sections?

The Performance Benchmarks section uses an average per app approach. Each app that meets statistical thresholds is equally weighted, regardless of size. You can filter by app size, defined as:

  • Large: Top 20% by installs
  • Medium: 50th–80th percentile
  • Small: 20th–50th percentile

If a specific app size cohort doesn't meet the threshold, data will include all size categories that do.
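As an illustrative sketch, the size tiers above could be derived from install-count percentiles. The install figures, and the handling of apps below the 20th percentile, are hypothetical assumptions for illustration, not AppsFlyer's actual implementation:

```python
import numpy as np

# Hypothetical install counts for a set of apps (illustrative data only).
installs = np.array([120, 4500, 80, 900, 15000, 300, 2200, 60, 7000, 480])

# Percentile cutoffs matching the tier definitions above.
p20, p50, p80 = np.percentile(installs, [20, 50, 80])

def size_tier(n):
    """Classify an app by install volume against the percentile cutoffs."""
    if n >= p80:
        return "Large"            # top 20% by installs
    if n >= p50:
        return "Medium"           # 50th-80th percentile
    if n >= p20:
        return "Small"            # 20th-50th percentile
    return "Below threshold"      # bottom 20% (assumed label, not in the doc)

tiers = [size_tier(n) for n in installs]
```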

In contrast, other sections use aggregated data. That means apps with more data points have a proportionally larger influence on the results. This applies to normalized trends (e.g., in Section 1), percentage splits (Sections 3 and 5), and percentage change (Section 4).
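To illustrate the difference, here is a minimal sketch comparing an equal-weighted average per app with a pooled, aggregated calculation. The install and conversion figures are hypothetical:

```python
# Two apps of very different sizes, each with a per-app conversion rate.
apps = [
    {"installs": 1_000_000, "cvr": 0.02},  # large app
    {"installs": 10_000,    "cvr": 0.10},  # small app
]

# Performance Benchmarks style: each app counts once, regardless of size.
avg_per_app = sum(a["cvr"] for a in apps) / len(apps)

# Aggregated style: pooled conversions divided by pooled installs,
# so the larger app has a proportionally larger influence.
aggregated = (sum(a["installs"] * a["cvr"] for a in apps)
              / sum(a["installs"] for a in apps))
```

Here `avg_per_app` is 0.06, while `aggregated` lands near 0.021, close to the large app's own rate.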

How are media types grouped?

Media types are classified based on the type of paid marketing activity:

User acquisition (UA) spend / paid installs metrics

  • Social & Search: Google Ads, Meta Ads, TikTok For Business, Apple Ads, Snapchat, and 6 others
  • Ad Networks: AppLovin, Mintegral International Limited, Unity Ads, ironSource, and 8 others
  • DSPs: Moloco, Liftoff, DV360, and 3 others
  • OEM & Preload: Xiaomi, vivo, Digital Turbine, Transsion, Oppo, and 6 others
  • Rewarded: adjoe, AdAction, Tapjoy, Mistplay, and 3 others

Remarketing ad spend / paid remarketing conversions

  • Social & Search: Google Ads, Meta Ads, TikTok For Business, Google Marketing Platform (DV360 & CM360), Snapchat, and 5 others
  • Ad Networks: Mintegral International Limited, Criteo, SHAREit, and 3 others
  • DSPs: RTB House, Liftoff, Remerge, RevX (powered by Affle), Moloco, DV360, InMobi DSP, Adikteev, and 6 others

How are statistical outliers removed?

In the Performance Benchmarks section, the top and bottom 10% of values for the selected metric are excluded. For example, if Day 7 Retention is selected, the 10% of apps with the highest and lowest Day 7 retention values are removed before calculating the average.
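A minimal sketch of this kind of trimming, using hypothetical Day 7 retention values; the exact trimming arithmetic AppsFlyer applies is not specified here:

```python
def trimmed_mean(values, trim=0.10):
    """Drop the top and bottom `trim` fraction of values, then average the rest."""
    vals = sorted(values)
    k = int(len(vals) * trim)              # number of values to drop at each end
    kept = vals[k:len(vals) - k] if k else vals
    return sum(kept) / len(kept)

# Hypothetical Day 7 retention values (fractions) for 10 apps.
d7 = [0.02, 0.08, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15, 0.18, 0.55]

# The lowest (0.02) and highest (0.55) values are removed before averaging.
benchmark = trimmed_mean(d7)
```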

In sections using aggregated data (see question above), our anomaly detection combines three rules. Two flag irregularities within a single time slice (e.g., a quarter): values that make up 20% or more of the total, or are 10x larger than the second-highest, signaling dominance or extreme outliers. The third rule uses Median Absolute Deviation (MAD) to detect values that deviate from typical patterns — either across quarters (spotting shifts over time) or within a quarter (spotting group-level outliers).

Why does data appear in some cases but not in others?

Our methodology applies strict statistical thresholds, including a minimum number of apps, a minimum number of companies, a minimum number of installs, and the exclusion of outliers. Data is displayed only when it meets our thresholds. As a result:

  • Some options in dropdowns may not appear
  • Certain metrics may be hidden if they lack sufficient data for at least 4 consecutive quarters, counting back from the most recent
  • Entire sections may be excluded if fewer than two metrics are available

We refresh the data every quarter to update the tool. As AppsFlyer continually adds apps to its client base, the likelihood of surpassing these thresholds increases over time.

When is data shown on a sub-region level?

If thresholds aren't met for a specific country, we follow this logic: when data is sufficient at the sub-region level (but not country level), the country will appear in the dropdown, but the results shown will reflect the sub-region, clearly labeled as such. If thresholds aren't met even at the sub-region level, the country will not appear in the dropdown at all.
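The fallback logic can be sketched as a simple lookup. The country and sub-region flags below are hypothetical placeholders, not actual threshold results:

```python
# Hypothetical threshold results per country and sub-region (illustrative only).
country_ok = {"Peru": False, "Brazil": True}
subregion_of = {"Peru": "Latin America", "Brazil": "Latin America"}
subregion_ok = {"Latin America": True}

def resolve(country):
    """Return (label, level) per the fallback logic above, or None to hide."""
    if country_ok.get(country):
        return country, "country"
    region = subregion_of.get(country)
    if region and subregion_ok.get(region):
        return region, "sub-region"  # shown in dropdown, labeled as sub-region
    return None                      # country hidden from the dropdown entirely
```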

Sub-regions are grouped based on the following countries:

  • North America: United States, Canada
  • Latin America: Brazil, Mexico, Argentina, Colombia, Chile, Peru
  • Western Europe: United Kingdom, Germany, France, Italy, Netherlands, Spain, Switzerland, Sweden, Belgium, Ireland, Austria, Denmark, Norway, Israel, Portugal, Greece, Finland
  • Eastern Europe: Turkey, Poland, Romania, Czech Republic, Ukraine, Kazakhstan, Hungary, Uzbekistan, Azerbaijan
  • Middle East: Saudi Arabia, United Arab Emirates, Egypt, Iraq, Qatar
  • Africa: South Africa, Nigeria, Kenya, Morocco, Algeria, Ghana
  • Indian Subcontinent: India, Pakistan, Bangladesh, Nepal
  • Southeast Asia: Indonesia, Philippines, Thailand, Vietnam, Malaysia, Singapore
  • Japan & Korea: Japan, South Korea
  • Australia & New Zealand: Australia, New Zealand

Which app categorization/classification do you use?

We use Sensor Tower's taxonomy for both gaming and non-gaming apps, based on its Game IQ and App IQ solutions.

Where can I request new metrics to be added to the platform?

We'd love to hear your feedback in this form.