Confessions of a verification firm data scientist: It’s good when Facebook errs

By Ross Benes on May 16, 2017

Originally written by Ross Benes of Digiday.

Throughout the ad supply chain, at some point everyone has a motive for schadenfreude. For the latest installment of our anonymous Confessions series, we talked to a data scientist at a verification vendor who works with publishers, agencies, brands and programmatic platforms. The data scientist said that he roots for Facebook and Google to make mistakes and that the problem of brand safety is overblown.

Here are excerpts, edited for clarity.

Do you root for platforms like Facebook to make errors?
Quite honestly, yeah. It is good for us when Facebook makes an error because it puts pressure on them to open up to third parties like us.

Did Facebook’s measurement errors get so much press because people have been waiting for something to leverage against the platform?
The amount of attention Facebook’s errors received was fair because so much money runs through its platform. But in some cases, people rooting against walled gardens can overinflate the news.

Can you share an example?
YouTube’s brand-safety debacle. I talked to a few brand execs when the news first broke, and they understood that some of their impressions would show up in brand-unsafe places. But if you look at the impact of those impressions on a campaign, it’s negligible.

So why did it get so much attention?
Well, if you can take Google off their pedestal, that’s good for a lot of people.

Do you think brand safety is overblown as a problem?
Across the industry, it’s overblown. And that’s because people have a self-interest in pushing it as a problem.

But don’t you have a self-interest in pushing it as a problem?
Oh, yeah. A lot of the push against YouTube came from vendors like measurement and verification companies. And some advertisers just take in the news and adopt the narrative.

Whenever I talk to ad-fraud researchers, they always say the fraud prevention from companies like yours lags behind. What do you make of that?
We aren’t catching all the fraud, but nobody can claim they are. And those people have a perverse incentive to claim as much fraud as possible so that people bring them on board as consultants.

Don’t you also have an incentive to overestimate fraud throughout the industry so that people buy your anti-fraud products?
Yes, and that is the tricky thing; everyone has their own incentives. If a verification company puts out an industry number on fraud, it is in their best interest to put out a high number. The numbers we come up with have a range, and we publish the high-end estimate.

Isn’t that deceptive?
It isn’t, because our methodology and data support how we got there, and you can follow it in our reports.

Fraud researchers claim that verification filters are easy to game because thirsty salespeople sell dashboards to fraudsters, who can then A/B test bots until they slip through. Do you feel that your sales team compromises your filters?
They have a fair point, and that is something we have had to work through as we’ve grown as a company. But that was more of a problem in our early days before we were accredited by the Media Rating Council.

Your company provides viewability ratings. Do you think viewability is overrated?
It is a mixed bag. I mean, everyone gets why it is bad when an ad doesn’t get viewed. But strictly focusing on viewability can do more harm than good.

Can you elaborate?
The continued focus on viewability often doesn’t take fraud into account. Some buyers will use a platform just because it guarantees 100 percent viewability. But there are ways to fraudulently increase viewability. And if you prioritize viewability over what drives conversions, you’ll end up on junk sites.