Social Media on iOS hit 275% Real Users Lift: three in four installs were fake for a full quarter
Spoofing is the fastest-rising fraud technique in 2025, and it fabricates everything from scratch
Introduction
The paradox of mobile fraud in 2025 is that detection improved across the board, yet fraud rates barely moved. But fraud rates are not the only metric to watch. As marketing investment grows, so does the absolute number of fraudulent installs. A flat rate on a larger spend still means more budget wasted, more cohorts polluted, and more growth that never actually happened. The story is not in the aggregate numbers but in the distribution underneath them: which verticals absorbed the most pressure, which channels absorbed the overflow when others tightened, and which techniques evolved fast enough to stay ahead of the systems built to catch them.
The most direct measure of what fraud costs is not the fraud rate itself but the proportion of real users in what you bought, and what that proportion means for the growth you think you are seeing. Real Users Lift puts a number on it. In Gambling on Android, advertisers paid for nearly two fake installs alongside every real user acquired. In Social Media on iOS, Real Users Lift reached 275% in Q2-2025: for one full quarter, three in four installs were fake, and every growth metric built on that data was measuring something that did not exist. The installs were there. The users were not.
Three patterns run through every block of analysis. Fraud migrates: close one channel and volume shifts to wherever scrutiny is thinnest, whether that means organic, owned media, or a more sophisticated technique. Vertical risk is not uniform. A moderate platform-level fraud rate can conceal a category running at 59% next to one running at 7%, and the gap between them is not a data artifact but a structural feature of how fraud operators target high-payout environments. Sophistication is rising. Spoofing, which fabricates every signal from scratch rather than manipulating real ones, was the fastest-rising fraud technique throughout 2025, outpacing overall install growth every quarter.
That is what this report provides. We examine how fraud shifted across platforms, verticals, and channels in 2025, analyzing where organic and affiliate fraud concentrated, how the affiliate-to-SRN risk gap widened to 36x, which techniques accelerated and which plateaued, and what Real Users Lift reveals about the distance between reported performance and actual user acquisition. The goal is not to document that fraud exists but to show where it is sitting right now and where the data suggests it will move next.
* All results are based on fully anonymous and aggregated data. To ensure statistical validity, we follow strict volume thresholds and methodologies and only present data when these conditions are met.
Android fraud diverged sharply by vertical in 2025; Gambling is the only category still rising
Overall iOS fraud dropped 33% YoY from 17.5% in Q1 2025 to 11.7% in Q1 2026, crossing below Android’s 14% to 15% range by Q4 2025. Android held largely flat across the year. The shift reflects a genuine improvement in iOS traffic quality, but the more revealing story is what’s happening at the vertical level.
For the categories most advertisers spend heavily in, Android fraud rates are moderate but persistent. Gaming runs at 7%, Shopping at 10%, Finance at 31%. These are the baselines most performance teams are working against.
The outlier is Gambling. Android fraud in that vertical rose 20% YoY from 49% in Q1 2025 to 59% in Q1 2026, peaking at 64% in Q4. It is the only vertical where fraud is still accelerating. Every other category either held flat or improved YoY. High payouts, rapid market expansion, and concentrated Q4 spend create exactly the conditions fraudsters are built to exploit.
Regional patterns add another layer. Vietnam dropped 31% YoY from 29% in Q1 2025 to 20% in Q1 2026, but remains well above the global average. Indonesia follows a similar arc, dropping 26% from 17% to 13%. Both markets still run elevated fraud rates. Brazil has been creeping up, from 10% in Q1 2025 to 12% in Q1 2026, pointing to systematic rather than opportunistic activity. The US stands apart among developed markets: Android fraud sits at 19% in Q1 2026, down only modestly from 21% a year earlier, making it the most persistently high-risk developed market in the analysis.
App install fraud rate trend
Organic and affiliates drive 9 in 10 fraud installs, and the split between them is revealing
Two channel types account for the overwhelming majority of fraud installs across both iOS and Android combined: organic at 52% and affiliates at nearly 40%. In AppsFlyer’s measurement, organic covers all installs that couldn’t be attributed to a paid source: web, cross-channel, offline, and anything that didn’t match a campaign. Organic fraud shows up in two ways: fraudsters deliberately routing fake traffic through organic channels to wash new devices into the ecosystem, and fraud attempts on paid sources where attribution fails and the install defaults to organic.
Either way, it rarely gets scrutinized, which is exactly why fraudsters use it. The real damage is subtler: organic is every company’s internal performance benchmark, so when it’s polluted, it distorts the baseline that everything else gets measured against. Affiliates follow a different logic: the more intermediaries between an advertiser and the actual traffic source, the harder it is to verify what you’re buying. SRNs sit at the opposite extreme with just 1.5% of all fraud — their closed-loop measurement leaves far less room to manipulate.
The vertical breakdown makes the pattern concrete. The motivation never changes — only the channel does. Closing one channel doesn’t eliminate fraud, it relocates it. In Finance, affiliates fell from 58% to 49% of all Finance fraud while organic rose from 35% to 46%. By Q1 2026, both channels carry roughly equal shares. Not an improvement. A migration.
Gaming runs the opposite playbook: 80 to 84% of Gaming fraud flows through organic, where bot farms generate fake installs with minimal scrutiny. Affiliates account for just 6 to 7% of all Gaming fraud, and that share held flat all year. Food and Drink is different again: organic accounts for 43 to 64% of fraud there not because of poor media buying, but because of promo abuse, with bots gaming offers, referral bonuses, and first-purchase discounts. The fraud isn’t in the paid channel. It’s built into how these apps grow.
Country patterns tell the same story at a market level. In India, affiliate share of fraud grew 14% YoY from 57% to 65%, the highest concentration of any major market. The UK rose 35% YoY from 42% to 57%. Pakistan surged 74% YoY from 32% to 55%. The more these markets scale, the more fraud flows through intermediaries.
Fraud distribution by media type
The affiliate-SRN gap hit 36x — and fraud keeps migrating to wherever scrutiny is thinnest
Affiliates run a ~40% fraud rate. SRNs run at around 1%. That gap held above 30x every quarter and widened to 36x by Q1 2026. The channel you buy through is one of the most consequential decisions in a fraud strategy.
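As a quick sanity check on that multiple, a sketch assuming an SRN fraud rate of 1.1% (an assumption chosen within the reported "around 1%" range to be consistent with the stated 36x ratio):

```python
# Sanity check on the affiliate-to-SRN risk gap.
# The ~40% affiliate rate is stated in the report; the 1.1% SRN rate is
# an assumption within the reported "around 1%" range.
affiliate_fraud_rate = 0.40
srn_fraud_rate = 0.011

gap = affiliate_fraud_rate / srn_fraud_rate
print(f"Affiliate-to-SRN risk gap: {gap:.0f}x")  # prints "Affiliate-to-SRN risk gap: 36x"
```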
The affiliate rate is structural, not cyclical: ~40% every single quarter of the year. But within that flat average, verticals are diverging sharply. Gaming is the one genuine bright spot: its affiliate fraud rate dropped 51% YoY from 29% to 14%, the steepest channel-level improvement in the dataset, the result of years of hard KPI enforcement. Shopping moved in the opposite direction, rising 31% YoY from 27% to 36% and peaking at 41% in Q4, when holiday campaigns onboarded new networks with looser controls. Social Media is stuck at nearly 50%, flat all year and running 8 to 9 percentage points above the global affiliate average.
Two other channels are accelerating. Owned Media fraud rose 221% YoY from 3.4% to 11%. DSP fraud rose 59% YoY from 5.6% to 8.9%. The pattern is the same: as scrutiny tightens in one place, fraud moves to the next.
Organic offers no escape in high-value verticals. Gambling organic runs at 50%, higher than the affiliate average across most other categories. Finance organic sits at 32%, essentially flat YoY. The blend-in effect is consistent: as paid channels tighten, sophisticated fraud shifts to wherever detection is weakest.
Country patterns break into three groups: markets where fraud fell across both channels (Indonesia, India, Brazil, Nigeria), markets where organic improved but affiliate worsened (US, UK, France, Spain, Germany), and markets that show the signature of a deliberate fraud campaign (Vietnam, Singapore, China). India is the most consequential improver: it accounts for nearly 28% of all global Android fraud, so even modest declines there move the global needle more than eliminating affiliate fraud entirely in most other markets.
In the US, UK, France, Spain, and Germany, affiliate rates rose steadily through 2025 — the same markets that responded most aggressively to the Q2 organic spike. UK organic ended at 9.2%, US at 12%, both well below the global average. The markets best at fixing organic left their affiliate channels undefended.
Vietnam ran 33 to 34% organic fraud for three straight quarters, then collapsed in Q4, a deliberate operation that was eventually blocked. Singapore showed the same spike-and-drop pattern. China is different: its organic rate is permanently elevated because 97% of traffic routes through organic and all fraud is device-based, with no seasonal component.
Fraud rate by media type trend
Spoofing is the fastest-rising fraud type in 2025, with fake installs dominating on both Android and iOS
Spoofing was the fastest-growing fraud technique in 2025. Unlike hijacking, which steals credit for real installs prior to attribution, spoofing fabricates everything from scratch: fake devices, fake users, fake in-app events, all of it designed to look like legitimate traffic. Spoofed installs grew at multiples of overall install growth quarter after quarter, with the spoofing rate increasing significantly between Q3-2025 and Q1-2026.
As traditional techniques were progressively blocked, fraudsters moved toward more sophisticated alternatives. Spoofing represents the endpoint of that progression: rather than manipulating signals around an app, it injects entirely synthetic installs and events, mimicking real user signals, device fingerprints, and behavioral patterns.
Fake installs dominate over hijacking: 87% of Android fraud and 92% of iOS fraud. Fraud on the two platforms is caught through different mechanisms. Android fraud is built on device infrastructure: emulators, device farms, and complex behavioral patterns. On iOS, store validation accounts for 70% of detected fraud, catching installs with fabricated signals that don’t match genuine App Store records. A growing share of iOS fraud is caught via behavioral anomalies after install, reflecting fraudsters investing in more convincing post-install behavior as detection tightens.
Hijacking works differently. Rather than fabricating installs, it intercepts credit for installs that were already happening. On Android, fraudsters monitor OS signals to detect downloads in progress and claim attribution at the last moment. On iOS, they flood the system with fake clicks just before organic installs happen. Finance is the largest Android hijacking target at 33% of all Android hijacking, nearly tripling in volume across 2025. On iOS, the US leads at 27% of all affected installs, with a 136% Q4 surge aligned to peak Shopping season, when the economics of stealing attribution are at their highest.
The vertical patterns on Android show how detection shapes technique. Gaming fraud shifted from device emulators (down 32% YoY) to physical device farms (up 72% YoY) as Gaming’s rich post-install data — session depth, retention, level completions — made emulators easier to catch there than anywhere else. Finance attracted the highest share of AI-based behavioral detection, driven by post-install verification steps like registration and ID upload. In both cases, the fraud method in use is a direct response to whichever detection layer is weakest.
Fraud distribution by reason
Fraud sub-reason distribution
Store validation fraud targets categories by design, not geography
Every time a user installs an app, the app store generates a receipt confirming it was a real, legitimate download. Store validation fraud works by fabricating these receipts and submitting them to ad measurement systems to claim credit for fake installs — no real device, no real user required, at near-zero cost per attempt. On iOS, where this technique is most concentrated, store validation accounted for 67 to 73% of detected fraud throughout 2025, making it the dominant fraud signal on the platform.
The share peaked at 73.1% in Q2-2025 and fell 24% YoY to 50.8% by Q1-2026. That decline does not reflect a drop in absolute volume. It reflects the rise of behavioral anomalies as a secondary technique: fraudsters are now combining fabricated receipts with fake in-app behavior, which registers as a separate fraud signal. The two methods are increasingly being used together.
What determines whether this type of fraud is viable is the type of app, not the location. In categories where the action after install is simple — a free download or a basic account signup — fabricating a convincing receipt requires almost no sophistication. Graphic & Design (99.8%), News & Magazines (99.0%), and Social Media (98.2%) are near-universally caught by this method.
Categories where the advertiser only pays when a user completes a high-value action resist it: Gambling (39.3%), Finance (54.3%), and Transportation (43.3%) see lower rates because the fraud payout depends on actions that can’t be faked at the install level alone.
At a country level on iOS, Spain sustains the highest store validation rate of any major market throughout the period, ranging from 55% to 82% and peaking at 81.7% in Q4-2025. High ad spend on Finance and Shopping apps, combined with a relatively small market, makes it a high-return, lower-scrutiny target. The US and UK (both ranging from 46% to 69%) are the largest iOS fraud markets by volume but show lower rates than smaller markets, reflecting stronger verification infrastructure.
Store validation fraud share by platform
Real Users Lift: how much of what you bought actually existed
Real Users Lift measures the ratio of fake installs to real ones. A 20% lift means that for every five real users acquired, one fake install came alongside them, diluting retention cohorts, LTV models, and ROAS calculations accordingly. Every business decision built on that data carries the same distortion.
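The arithmetic can be made concrete with a short sketch; the function name and figures here are illustrative, not AppsFlyer's implementation:

```python
def real_users_lift(total_installs: int, real_installs: int) -> float:
    """Fake installs as a percentage of real installs.

    Illustrative implementation of the metric as described in the text,
    not AppsFlyer's actual formula.
    """
    fake_installs = total_installs - real_installs
    return 100 * fake_installs / real_installs

# At 275% lift, only about 1 install in 3.75 is real (~27%):
lift = real_users_lift(total_installs=375, real_installs=100)
print(f"lift = {lift:.0f}%, real share = {100 / 375:.0%}")  # lift = 275%, real share = 27%
```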
The most extreme case in the dataset is Gambling on Android. At 175% lift in Q4-2025, advertisers were paying for nearly two fake users for every real one. The category’s high payout has made it so economically attractive to fraudsters that the majority of installs being bought are not real. Every benchmark, cohort, and ROAS figure from Gambling Android campaigns is built on a foundation where most of the users don’t exist.
Social Media on iOS reached 275% lift in Q2-2025, meaning advertisers were getting roughly one real user for every four installs they paid for. The recovery to 15.6% by Q1-2026 is the most dramatic improvement in the dataset. It also illustrates how quickly and completely a category’s measurement can be compromised: for at least one full quarter, every Social Media iOS campaign metric was built on severely distorted data.
At the platform level, iOS is now cleaner than Android, a reversal from earlier in the period. iOS lift improved 38% YoY from 21% to 13% while Android held flat at around 17%. Buying on iOS now delivers proportionally more real users than Android for the first time.
Finance Android is the clearest case of structural stagnation. Lift has not moved in five quarters, locked at 50 to 53%. For every real Finance user acquired on Android, advertisers have consistently paid for one fake alongside it, all year, with no improvement. Given Finance’s scale on Android, the entire Finance Android measurement baseline is overstated by approximately half.
Vietnam Android shows what the opposite looks like: lift dropped 51% in a single quarter, from 41% to 20%. That speed is the point. Lift does not improve gradually through measurement refinement. It improves suddenly when a specific fraud operation gets shut down, making it an operational metric, not just a reporting one.
Real users’ lift trend
- As rewarded advertising scales globally, how is the fraud landscape evolving within this channel — and what makes it particularly susceptible (or resilient) compared to other UA channels?
- Rewarded advertising shifts fraud risk from the supply chain to the end-user. Unlike traditional UA fraud, which aims to manipulate attribution and siphon ad spend at a network level, rewarded fraud is driven by individual users seeking personal profit. This channel is uniquely susceptible because the incentive is direct: real-world value motivates individuals to employ automation, emulators, or scripts to exploit the system. However, rewarded advertising is inherently more resilient than traditional channels that are vulnerable at the install layer. Because fraudsters must target deep-funnel milestones to trigger a payout, they face significantly higher friction. This complexity, combined with multi-party oversight from the advertiser, the network, and the MMP, makes it much harder to manipulate the journey without detection. Ultimately, by moving rewards away from the install and toward deep-funnel engagement, you make fraud difficult for the individual actor, regardless of how much they want the rewards.
- Fraud is increasingly shifting from install manipulation to post-install behaviors. What are the most critical fraud patterns you're seeing today in rewarded campaigns, and how are they impacting performance?
- adjoe’s dedicated anti-fraud unit focuses on stopping fraud across multiple layers. For us, the focus has always been multi-layer: preventing post-install fraud, where users attempt to fake in-game milestones to claim rewards, and blocking access to our campaigns entirely to remove the opportunity for malicious activity. The first step involves using our data and AI to block fraudulent users before they enter the rewarded experience. Our second layer occurs during the gameplay journey: by leveraging internal data and collaborating with MMPs like AppsFlyer and our long-term advertising partners, we ensure users are blocked at any sign of potential risk. Our anti-fraud suite is designed with two primary goals: protecting advertiser budgets and ensuring the profitability of the publishers issuing the rewards.
- Many advertisers still rely on CPI or early funnel metrics, even in fraud-heavy environments. How should marketers rethink measurement in rewarded campaigns to better account for fraud and focus on true user quality?
- To move beyond a reliance on CPI or early-funnel metrics in fraud-heavy environments, marketers must rethink measurement by evaluating both their internal toolkits and the abilities of their networks’ anti-fraud suites. A critical step in focusing on true user quality is integrating purchase validation into your ROAS evaluation. We recommend utilizing tools like AppsFlyer’s IAP verification to combat a specific, growing fraud scheme: refund exploitation. In high-value rewarded campaigns, fraudsters make an in-app purchase to claim a significant reward — often $50 or more — and then immediately request a refund from the app store. Without connecting refund data to your UA sources, you cannot calculate the true ROI of campaigns and can be scaling networks providing inauthentic purchase behavior. By integrating purchase validation, the MMP receives a security token for every transaction, ensuring the purchase is legitimate and final. If you aren’t connecting these dots, you will continue pouring budget into channels that appear successful but are actually draining profit.
- Fraud signals are becoming more accessible through MMPs and internal data. How can partners like adjoe use these signals not only to block fraudulent traffic, but to actively optimize toward higher-quality, fraud-resistant users?
- Optimization starts with distinguishing legitimate behavioral patterns from malicious ones. At adjoe, we use internal data to build a baseline of real user behavior, allowing us to identify and block fraud both before a user enters a format and throughout the post-install journey. We enrich these internal signals by layering AppsFlyer’s anti-fraud suite. We also recommend that our partners enable specific tools, such as AppsFlyer’s purchase verification, to create additional defensive layers. This deep integration does more than just stop bad traffic — it creates a feedback loop that allows us to actively optimize campaigns toward fraud-resistant, high-quality users. By aligning the advertiser’s goals, AppsFlyer’s signals, and the adjoe SDK, we ensure that ad spend is continuously funneled toward genuine engagement.
- In today's ecosystem, fraud prevention depends on collaboration between advertisers, measurement platforms, and media partners. Where does this collaboration break down most often, and what does “good” look like in practice?
- Effective fraud prevention depends on proactive transparency. Collaboration most often breaks down when one party — whether the advertiser, the network, or the MMP — makes structural changes to their product, placements, or logic without informing the others. When data shifts unexpectedly, legitimate growth can be misidentified as fraud, and actual fraud can be overlooked as a technical glitch. In practice, “good” collaboration is about aligning on expected behavior before it happens. A prime example occurred during adjoe’s 2025 Super Bowl campaign. With 128 million viewers tuning in, one of our publisher partners expected to acquire roughly 10 million users from their campaign — meaning more traffic and more installs in a short time. Success required a three-way alignment: Advertisers were informed of the upcoming scale so the massive influx of traffic was expected and planned for, rather than flagged as a bot attack. MMPs (AppsFlyer) were prepared for the surge in data flowing from adjoe’s platform. And adjoe monitored the traffic against this specific context to ensure quality stayed high despite the volume. When all parties are aware of the expected behavior, the ecosystem remains stable. Collaboration shouldn’t be a reactive post-mortem on why a campaign was blocked. It should be a proactive synchronization that ensures every partner is ready for the scale.
- As fraudsters become more sophisticated and privacy reduces available signals, how do you see mobile ad fraud evolving — particularly in rewarded channels — and what should the industry prioritize next?
- As the industry evolves, the way networks approach anti-fraud prevention is key. For advertisers, understanding the size and scope of a network’s anti-fraud capabilities is essential to ensuring campaigns are protected. For publishers, it ensures monetization margins remain secure. At adjoe, we have an entire tech unit dedicated to prevention and believe this should be an industry standard and a core responsibility for all networks. Fraudsters are agile. To stay ahead, defenses must be proactive and dynamic, adapting to new behavioral patterns faster than they can be exploited. If a network lacks this dedicated focus, it is only a matter of time before the next major fraud scheme emerges. Additionally, as the ecosystem expands into third-party stores, marketers must ensure their MMP and network partners maintain the same security and integration standards across these new platforms as they do on primary app stores. Ultimately, we see the evolution of fraud as a dynamic challenge, and success will depend on the speed of a partner’s response and their specialized focus on prevention.
- Fraud tactics keep evolving. Which emerging threat are you most focused on right now, and why?
- We’re seeing that the reaction time to changes and anti-fraud rules has improved significantly. I’m sure that with AI, finding new gaps is only getting easier. However, if you focus on the fundamentals (root cause gaps) — meaning you build fraud detection that closes the core vulnerabilities — I believe you can block 80–85% of all fraud attempts. I’m talking about acquisition fraud, of course. I still believe it’s the primary type of fraud for partner, affiliate, agency, and marketing traffic. Most acquisition fraud occurs in the mobile channel and with mobile traffic.
- Organic traffic has traditionally felt "safe" from fraud. Has that assumption held up in your experience?
- Organic traffic has never really been safe, despite the common belief in the market. In my experience, organic traffic is often highly motivated and performs well, making it the most attractive target for fraudulent partners. In my opinion, organic traffic suffers the most from fraud. Here’s a tip: pull your data, review partners or channels and their activity, and compare it with organic traffic behavior. Where the patterns and cohorts align most closely, take a deeper look at other fraud metrics — you may uncover some interesting findings and potential issues.
- Different channels carry very different fraud risk profiles. How does that factor into how you plan and protect your media mix?
- Obviously, the channels you have less control over carry higher risks, but even with very controllable channels you still shouldn’t forget to test them for incrementality. Channels involving agencies and unknown media sources — along with retargeting, CTV, and impression/view-based traffic in general — carry the highest risk due to the weak anti-fraud protection available and attribution models with lower probability. Tip: review every channel carefully. If you don’t understand where the traffic is coming from and you cannot verify it with your own eyes, you probably need to review the fraud metrics for that channel.
- Fraud doesn't look the same everywhere. How do you adjust your approach when you're running campaigns across different regions or markets?
- In our case, we haven’t seen a significant difference between regions in terms of pre-attribution or acquisition fraud. Of course, it depends on your marketing mix, but it’s mostly the same even in top-tier countries. Fraud is usually higher where your brand is strongest. Basically, the better your brand converts, the better it converts for fraudsters too. And I think it’s a mistake to believe that expensive traffic is safer from fraud. Tip: if you don’t see any fraud in Protect360, don’t skip incrementality testing — even clean-looking traffic needs to prove it’s driving real growth.
- When you notice something off in your campaign performance, how do you go about figuring out whether fraud is behind it?
- First, trust your gut. My data shows that in 9 out of 10 cases, if you feel like something is wrong — it usually is. Either way, you need to have: global monitoring, anomaly detection (since an anomaly can be either very bad or suspiciously good traffic), and scoring based on fraud patterns you’ve already found. This way, you reduce the manual workload. The workflow is always the same: even if you don’t see any obvious issues or unusual metrics, you still need to understand what is going on at a very detailed level.
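One anomaly signal in the spirit described above is traffic whose hourly pacing is suspiciously flat compared with human daily cycles. A simple dispersion check can flag it; this is a hypothetical heuristic with made-up numbers, not any vendor's production model:

```python
from statistics import mean, pstdev

def uniformity_score(hourly_installs: list[int]) -> float:
    """Coefficient of variation of hourly install counts.

    Human traffic follows daily cycles and is bursty; bot traffic paced
    by a scheduler can be suspiciously flat. A hypothetical heuristic,
    not a production detection model.
    """
    return pstdev(hourly_installs) / mean(hourly_installs)

# Toy 24-hour install profiles (illustrative numbers only):
human_like = [5, 2, 1, 1, 3, 8, 20, 35, 40, 38, 30, 25,
              28, 30, 33, 36, 42, 50, 55, 48, 35, 20, 12, 7]
bot_like = [25] * 12 + [26] * 12  # near-constant pacing

print(f"human-like CV: {uniformity_score(human_like):.2f}")  # well above zero
print(f"bot-like CV:   {uniformity_score(bot_like):.2f}")    # close to zero
```

A near-zero score is not proof of fraud on its own, which is why the answer above pairs anomaly detection with scoring on known fraud patterns.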
- Working with a wide range of advertisers, what changes are you seeing in how marketers approach fraud today — particularly in how it influences their measurement, budgeting, and optimization decisions?
- Fraud has moved from a technical concern to a fundamental planning-stage decision. Marketers are increasingly evaluating supply quality as a prerequisite for committing budget, assessing publisher transparency, device-signal consistency, and post-install cohort behavior with the same rigor traditionally reserved for reach and cost metrics. The scrutiny has become highly granular. Growth teams now analyze whether OS-version profiles align with regional benchmarks—where a disproportionate share of outdated versions often signals non-human traffic—and whether carrier and device-language signals remain internally consistent. While pre-campaign due diligence is essential, it is no longer sufficient; success requires continuous monitoring of agreed KPIs once the campaign is live to ensure quality persists at scale. The conversation has shifted from reactive post-campaign reconciliation to proactive pre-campaign supply diligence. In this environment, supply partners are not equally equipped. Providers utilizing a direct SDK on the publisher side operate with a superior signal layer, distinguishing genuine user activity from emulated environments through on-device signals that are far harder to spoof than those reconstructed from intermediated bid-stream data alone.
- As a performance-driven platform how does fraud in the ecosystem impact model accuracy and campaign optimization outcomes?
- Fraud corrupts the very feedback loops designed to drive optimization. Most bidding systems learn from outcome signals—installs, events, and revenue—to determine where to allocate the next dollar. When fraudulent installs enter this loop, the model misidentifies them as legitimate conversions, assigns credit to the faulty supply path, and bids higher on it. This creates a self-reinforcing cycle where budget quietly concentrates on the lowest-quality supply. Internal analysis of over 10 million supply paths reveals that fraud follows a steep Pareto curve: roughly 5% of paths account for over 65% of all detected fraudulent installs while contributing negligible genuine conversions. Bidding models optimizing strictly for cost-per-install will gravitate toward these paths due to their apparent efficiency, making pre-bid supply filtering a structural necessity. The practical reality is that filtering downstream at the in-app event level addresses only the symptom. By the time an event is flagged, the optimization model has already “learned” from the fraudulent install that preceded it. To protect the learning process, filtering must happen upstream at the install level, preventing the learning process from being corrupted in the first place.
- What are the most significant types of fraud signals that can distort automated bidding systems today, and how do you mitigate their impact in real time?
- The most damaging signals are those that masquerade as performance. Supply paths generating installs with believable CTIT distributions and device profiles may appear legitimate but produce zero downstream value. Because these paths survive initial quality checks and enter model training, they become exceptionally expensive over time. Several indicators point to fraudulent activity: abnormally high rates of new device IDs appearing in tight windows, unusually uniform traffic distributions that lack natural human variation, and metadata inconsistencies—such as mismatches between declared volume and actual installs or conflicting location signals. Fraud also exhibits high concentration. Internal bundle-level analysis shows that a small percentage of app bundles drive the majority of risk. Because exchange and country signals are more dispersed, blanket blocking at those levels causes significant collateral damage. Surgical detection requires a multi-dimensional evaluation of the full supply path, integrating bundle, exchange, country, and device signals simultaneously.
- How do you incorporate external fraud signals—such as those from MMPs—into your optimization frameworks, and what impact do they have on campaign performance?
- MMP signals provide high-fidelity, attribution-level precision through cross-network behavioral analysis and install-receipt validation. When an MMP flags fraud, the signal is definitive. However, maintaining comprehensive coverage remains a persistent challenge. On the platform side, layering these MMP signals alongside real-time behavioral data—such as render-to-click time distributions and traffic entropy—enables the sharpest detection. By combining these diverse inputs across more than 10 million supply paths, our platform maintains a fraud rate below 2%, utilizing multi-signal scoring that weights each input by observed reliability rather than relying on any single source as a definitive truth.
- Looking ahead, how do you see the relationship between fraud prevention and performance optimization evolving?
- Fraud prevention and performance optimization are essentially the same problem. Every dollar spent on unverified supply trains a model to seek out more of the same. Platforms that unify these functions will achieve a lasting edge, as their models will learn faster and more accurately from cleaner data. The critical advancement will be the move away from static rules. Rules catch yesterday’s fraud, while sophisticated IVT evolves to bypass them. This is where AI becomes indispensable—utilizing anomaly detection to learn what “normal” looks like and flagging deviations without needing to pre-label new fraud types. Fraudulent actors are also leveraging AI advancements, using on-device agents like OpenClaw variants to emulate human behavior. These bots can now emit deep-funnel events like adding to carts or completing advanced game levels, and even trigger small purchases. In this landscape, leveraging ML-based on-device SDK signals to detect fraud as early as possible is the only way to safeguard advertiser budgets. Finally, the organizational shift is equally vital. When fraud signals feed directly into the bidding model, prevention happens at the point of decision. The future lies in assigning a trust score to every supply path: higher trust earns higher bids, while lower trust leads to immediate suppression.
- Based on your experience as a DSP combating fraud, what is the most effective approach to fraud prevention, and which key actions drive meaningful reductions in fraud rates?
- From a DSP perspective, the most effective strategy is to move prevention upstream, embedding intelligence into the bidding process rather than relying on post-campaign reconciliation. This requires maximizing signals from multiple sources and building validation mechanisms that continuously refine traffic assessment before a bid is ever placed. By aggregating diverse signals—including supply provenance, device characteristics, and historical performance—and applying correlation models, DSPs can identify suspicious clusters early. These systems are further strengthened by AI-driven feedback loops that continuously improve detection accuracy based on campaign outcomes. Because fraud tactics are constantly evolving, meaningful reductions depend on the continuous analysis of massive datasets to identify anomalies with low false positives. Successful DSPs treat fraud prevention as an adaptive intelligence system, combining large-scale data analysis and proactive supply diligence to ensure only high-confidence inventory reaches the advertiser.
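The ideas running through these answers — combining MMP flags, device-ID churn, traffic entropy, and historical performance into a per-path trust score that gates the bid — can be sketched in a few lines. This is a minimal illustration under assumed signal names, weights, and thresholds; it is not the actual model of any platform quoted above.

```python
from dataclasses import dataclass

@dataclass
class SupplyPathSignals:
    mmp_fraud_flag: bool      # definitive fraud flag from an MMP (assumed boolean)
    new_device_rate: float    # share of installs from never-before-seen device IDs
    traffic_entropy: float    # 0..1; low values = unnaturally uniform traffic
    historical_cvr: float     # observed downstream conversion rate for the path

def trust_score(s: SupplyPathSignals) -> float:
    """Combine signals into a 0..1 trust score; all weights are illustrative."""
    if s.mmp_fraud_flag:
        return 0.0  # MMP flags are treated as definitive, per the interview
    score = 1.0
    score -= 0.4 * max(0.0, s.new_device_rate - 0.2)  # penalize device-ID churn
    score -= 0.3 * max(0.0, 0.5 - s.traffic_entropy)  # penalize uniform traffic
    score += 0.2 * min(s.historical_cvr, 0.1) * 10    # reward real conversions
    return max(0.0, min(1.0, score))

def bid_multiplier(score: float, suppress_below: float = 0.3) -> float:
    """Higher trust earns higher bids; low trust is suppressed outright."""
    return 0.0 if score < suppress_below else score
```

The point of the sketch is the shape, not the coefficients: fraud signals feed the bidding decision directly, so a low-trust path is priced down or excluded before spend occurs rather than reconciled after the campaign.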
Fraud is not a problem you solve once. Every time a channel tightens, fraud shifts to the next weakest point: a different channel, a different technique, a different market. When Finance advertisers tightened affiliate measurement, organic fraud rose 33%. When owned media lacked scrutiny, its fraud rate grew 221% YoY. The metric to watch is not your current fraud rate but the direction of change in channels that recently came under less pressure. A fraud strategy needs continuous fine-tuning, not a one-time setup.

Organic is the baseline every team uses to evaluate whether paid campaigns are working, which is exactly why keeping it clean matters. When organic is inflated, every comparison built on top of it is skewed. Two things inflate it: deliberate fake installs designed to look like self-discovered traffic, and paid campaigns where attribution fails and the install defaults to organic. High organic share can reflect genuine app discovery, or it can signal that a portion of paid traffic is simply not being attributed correctly. Either way, auditing organic is not just a fraud exercise. It is a data quality exercise.
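One practical form such an audit can take is a drift check on organic install share against its own trailing baseline. The window and tolerance below are illustrative assumptions, not a standard; the function and its inputs are hypothetical.

```python
def organic_share_alerts(weekly_shares, window=8, tolerance=1.25):
    """Flag weeks whose organic install share exceeds the trailing mean
    by more than `tolerance`x. A spike can mean fake installs posing as
    self-discovered traffic, or paid installs defaulting to organic when
    attribution fails -- either way, the baseline is no longer trustworthy."""
    alerts = []
    for i in range(window, len(weekly_shares)):
        baseline = sum(weekly_shares[i - window:i]) / window
        if weekly_shares[i] > baseline * tolerance:
            alerts.append(i)
    return alerts
```

A flagged week is a starting point for investigation, not a verdict: the follow-up is to check whether the spike coincides with a new paid campaign (attribution failure) or with no marketing change at all (likely fabricated organic traffic).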
The affiliate-to-SRN fraud gap reached 36x in Q1 2026 and held above 30x every quarter of the year. Affiliates remain a critical part of most media mixes for reach, cost efficiency, and market access that closed channels cannot always replicate. But that gap is structural: it reflects how much more room affiliates leave for manipulation. Expanding your media mix means taking on fraud risk proportional to the channels you add. The right response is not to avoid those channels but to monitor them more closely, with tighter verification and faster action when signals shift.
At a 59% fraud rate in Q1 2026 and a Real Users Lift of 175% in Q4, Gambling Android has crossed the threshold where fraud stops being an exception to manage and becomes the default condition to account for. Every benchmark, cohort, and ROAS figure from Gambling Android campaigns requires a different interpretive framework than any other category.
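The way the Real Users Lift figures are used in this report, the metric reads as the ratio of fake installs to real users acquired; a quick check of the arithmetic shows the headline numbers are consistent with that reading (this interpretation is inferred from the figures, not a formal definition).

```python
def fake_share(real_users_lift_pct: float) -> float:
    """Share of total installs that are fake, reading Real Users Lift
    as fake installs per real user (inferred interpretation)."""
    fake_per_real = real_users_lift_pct / 100.0
    return fake_per_real / (1.0 + fake_per_real)

# Gambling Android, 175% lift: ~1.75 fake installs per real user,
# so roughly 64% of installs are fake ("nearly two fake installs
# alongside every real user").
# Social Media iOS, Q2 2025, 275% lift: ~73% fake, i.e. about
# three in four installs.
```

The same arithmetic works in reverse: a category where "three in four installs were fake" implies three fake installs per real one, i.e. a lift near 300%, which is why 275% is described that way.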
Spoofing fabricates devices, users, and in-app events from scratch to mimic genuine traffic, which means it generates clean-looking signals rather than anomalies. It was the fastest-rising fraud technique throughout 2025, and its sophistication makes it the most likely to be undercounted in any fraud report, including this one. If your fraud numbers look stable, it is worth asking what they are not yet measuring.