
Here’s the Big Mistake Most Attribution Companies Make When It Comes to Ad Fraud

By Zach Schapira on August 11, 2017

Read the original post published by AdWeek.

Opinion: The only way to conduct meaningful measurement is to look at the activity with a wide lens.

As with all crimes, it is a truism that ad fraud will forever grow more sophisticated.

It’s also a truism that in order to keep up with any crime, you need people just as dedicated to fighting it as the people dedicated to profiting from it.

That’s why it’s curious when I hear measurement companies across the marketing technology spectrum play down the importance of fighting fraud, essentially arguing that since the problem won’t go away, we should find ways of buying and measuring media that make fraud irrelevant to our analysis.

It seems a noble enough cause, but when you dig deeper, this assertion is actually quite troubling.

Let’s put aside for a moment the moral and economic issues of ignoring fraud and examine the claim itself: Measurement companies that provide insights without vigilantly accounting for fraud in their analyses aren’t really providing accurate insights.

Some attribution companies in particular, unable to be experts in both advanced measurement and ad fraud detection, have been at the forefront of this narrative, claiming that media buyers should move beyond “simplistic” fraud assessments and focus instead on what truly matters: performance.

Over the past year, I’ve heard this thesis in numerous flavors.

Advanced algorithms that correlate spend with performance, the implicit argument goes, by definition devalue fraudulent impressions because bots don’t perform. So, if you’re using attribution technology to help properly allocate media dollars, focus on performance and you’ll get to the right place.

If only it were that easy. This line of reasoning misses the mark in two important ways: first, it assumes that bots don’t perform, and second, it assumes that all fraud is bot traffic.

Let’s take those one at a time. “Perform” doesn’t just mean make a purchase. Bots can and do perform meaningful actions other than browsing the web. Bots click on ads, bots complete pre-roll videos, bots accumulate attention time—heck, bots even fill out lead forms.

Across the entire marketing funnel, brands might reasonably consider any of these metrics to be key performance indicators and, therefore, optimize toward them.

Unfortunately, then, any system that cannot separate bots from humans would also, ipso facto, not be able to distinguish between media that were actually performing well against KPIs and media that just seemed like they were performing well against KPIs. Consumer journey analytics would be polluted by bots in all circumstances.
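
To make that pollution concrete, here’s a minimal sketch in Python, assuming hypothetical event records in which an is_bot flag has already been supplied by a separate fraud detection layer:

```python
# Hypothetical ad events: in practice, the is_bot flag would have to come
# from a dedicated fraud detection system, not from attribution data itself.
events = [
    {"placement": "site_a", "impressions": 1000, "clicks": 10, "is_bot": False},
    {"placement": "site_b", "impressions": 1000, "clicks": 90, "is_bot": True},
]

def ctr(rows):
    """Click-through rate: total clicks over total impressions."""
    imps = sum(r["impressions"] for r in rows)
    return sum(r["clicks"] for r in rows) / imps if imps else 0.0

humans_only = [r for r in events if not r["is_bot"]]
print(f"Measured CTR, bots included: {ctr(events):.1%}")       # 5.0%
print(f"Actual human CTR:            {ctr(humans_only):.1%}")  # 1.0%
```

An optimizer chasing the blended number would shift budget toward site_b, the bot-heavy placement, which is precisely the failure mode described above.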

One rebuttal here might be to suggest that if a marketer focused only on customer journeys leading to online purchases (rather than on engagement metrics or aggregate media performance), it wouldn’t be an issue. Fraud, the claim continues, would naturally be removed from the analysis because bots would never appear in a conversion path.

That brings us to the second fallacy: that all fraud is committed by bots. The truth is that there are many common fraud techniques—device hijacking, ad stacking and click injection, to name a few—that cause a real user on a real device to technically receive an ad or perform an action they otherwise wouldn’t have.

In such cases, these fraudulent events would actually appear in a conversion path (even one where conversion was defined as a purchase), and algorithms not specifically filtering fraud would erroneously assign incremental credit to each of those touchpoints.

For example, suppose a user clicks on a video ad and then returns to the website a couple of days later to purchase. The video ad should presumably receive all the credit. Now let’s suppose that between the click and the purchase, the user reads a long-tail blog that loads an ad from the same brand into a 1×1 pixel. An attribution model that can’t detect this as fraud would mistakenly allocate partial credit to the blog (or its ad network).
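
To see mechanically how that plays out, here’s a minimal sketch of a toy even-split attribution model in Python (the channel names, the is_fraud flag and the credit rule are illustrative assumptions, not any particular vendor’s model):

```python
# A simplified conversion path: the blog impression was served into a
# 1x1 pixel, so no human ever saw it (is_fraud is an assumed input flag).
path = [
    {"channel": "video_ad",    "is_fraud": False},
    {"channel": "blog_banner", "is_fraud": True},
]

def even_split_credit(touchpoints, filter_fraud):
    """Toy multi-touch model: split conversion credit evenly."""
    if filter_fraud:
        touchpoints = [t for t in touchpoints if not t["is_fraud"]]
    if not touchpoints:
        return {}
    share = 1 / len(touchpoints)
    return {t["channel"]: share for t in touchpoints}

print(even_split_credit(path, filter_fraud=False))  # video_ad: 0.5, blog_banner: 0.5
print(even_split_credit(path, filter_fraud=True))   # video_ad: 1.0
```

Without the filter, half the conversion credit, and whatever budget reallocation follows from it, leaks to a placement no human ever saw.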

With impressions like these frequently plaguing campaigns at double-digit rates, not having a robust fraud detection mechanism would throw off the entire analysis.

This problem is magnified further when you consider viewability. Just as attribution models should know to assign zero incremental value to an invalid impression, so, too, should they assign zero incremental value to a valid impression that never had the opportunity to be seen (since neither touchpoint had any influence on user behavior).
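
Extending the earlier sketch, the same filter generalizes naturally: a touchpoint earns incremental credit only if it is both valid and viewable (again, the flags here are illustrative stand-ins for signals a verification vendor would supply):

```python
def influential(t):
    # Zero incremental credit for invalid traffic, and likewise for valid
    # impressions that never had the opportunity to be seen.
    return not t["is_fraud"] and t["is_viewable"]

path = [
    {"channel": "video_ad",      "is_fraud": False, "is_viewable": True},
    {"channel": "blog_banner",   "is_fraud": True,  "is_viewable": True},   # 1x1 pixel
    {"channel": "below_fold_ad", "is_fraud": False, "is_viewable": False},  # never on screen
]

print([t["channel"] for t in path if influential(t)])  # ['video_ad']
```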

Yet some attribution companies have shrugged off the need for this detection, as well, perhaps under the same belief that one should trust the model even if the underlying data tells a different story.

Whether it’s knowing if an event was fraudulent, knowing if an impression was viewable or even knowing whether two devices belong to the same person, the only way to conduct meaningful measurement is to look at the activity with a wide lens, leveraging data wherever possible in order to create a more nuanced, reliable picture.

Otherwise, it’s just garbage in, garbage out.

Zach Schapira is global product strategy lead at digital marketing platform Impact Radius.