When Facebook Reporting Can't Be Trusted

By: Alex Afterman, 11:11 Digital

As Facebook advertisers, we’ve been spoiled by the level of detail and specificity in Facebook’s reporting over the past several years. Those who remember a pre-conversions world, where impressions, clicks, CPM and CTR were how you deemed a campaign a success or failure, have been especially grateful to work with an advertising platform that lets us report back to clients precise purchase counts, attributed revenue and ROAS by campaign, audience and specific ad.

However, in this new iOS 14.5 world, we’ve lost a lot of faith in Facebook’s reporting, and rightly so. 

Facebook itself tells us that it’s using modeled and incomplete data for reporting, and it’s clear that Ads Manager is not registering a large chunk of the purchase activity our ads are driving for clients. And the broader push toward privacy across the digital ecosystem will likely make the problem worse before it gets better.

So, what is there to do? 

Our approach has been to focus, now more than ever, on MER (total business revenue divided by total paid media spend), and then analyze Facebook’s role in that media spend.

MER is a bottom-line metric: it tells you how much was spent and how much was made. While unscrupulous actors can game this metric by overspending on retargeting versus prospecting, for the sake of this discussion we’ll assume your prospecting-to-retargeting mix is roughly the same as it was before the iOS changes.
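As a quick illustration of the math, here is a minimal sketch of the MER calculation, using hypothetical numbers rather than real client figures:

```python
# Minimal sketch of the MER calculation, using hypothetical numbers.
# MER = total business revenue / total paid media spend (all channels combined).

def mer(total_revenue: float, total_paid_spend: float) -> float:
    """Marketing Efficiency Ratio: revenue generated per dollar of paid media."""
    return total_revenue / total_paid_spend

# Example: $50,000 in total revenue against $12,500 of combined paid spend.
print(mer(50_000, 12_500))  # 4.0 -- every paid dollar maps to $4.00 of revenue
```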

For clients whose primary marketing channel is Facebook Ads, this has been quite easy to track. (These are also the clients that initially made me lose faith in FB Ads Manager, even before the iOS 14.5 ATT prompt went into effect.)

In this scenario, I’d look in Ads Manager and see that the account’s ROAS had cratered, only to look in Shopify or Google Analytics and see revenue flat or trending upward. In one extreme case, a client was running a sale whose first day was usually the largest, and Facebook reported daily ROAS under 1 for that entire first day. But in Shopify, we were on our way to a record revenue day, with Facebook as the primary marketing channel.

By the end of the day, the client had a record-setting revenue day, and looking back on that historical data, our actual Facebook ROAS was likely just over 3X. Ads Manager reported 1.4X in total for the day. Because of this, rather than focusing on Ads Manager ROAS, I began tracking weekly Facebook ad spend against total Shopify revenue. What I saw convinced me, more than ever, that Ads Manager was no longer useful for accurate reporting.
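If it helps to picture that weekly tracking, here is a small sketch, assuming you can export weekly Facebook spend and total Shopify revenue; the numbers below are made up for illustration:

```python
# Hypothetical weekly export: (week, Facebook ad spend, total Shopify revenue).
# When Facebook is the only meaningful paid channel, revenue / spend acts as a
# "blended ROAS" that can be compared against what Ads Manager reports.
weeks = [
    ("2021-W20", 10_000, 31_000),
    ("2021-W21", 12_000, 35_500),
    ("2021-W22", 12_000, 17_000),  # a week like this warrants a closer look
]

for week, fb_spend, shopify_revenue in weeks:
    blended_roas = shopify_revenue / fb_spend
    print(f"{week}: spend ${fb_spend:,}  revenue ${shopify_revenue:,}  "
          f"blended ROAS {blended_roas:.2f}x")
```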

Where this gets more complex is when multiple marketing channels are at play, so sales can’t be attributed entirely to Facebook ads.

Keep in mind that there is no perfect solution, but in these cases, I look at how changes to the various channel spends affect overall revenue. 

For example, if my client spends on both Facebook and Google and I raise my Facebook budget, I will then look at two things: whether total revenue increased, and whether Google ad spend changed.

  • If revenue increased AND Google spend was flat, the Facebook budget increase likely produced a positive return

  • If Google spend also changed, the results are harder to read and inferences must be made about where the return came from

  • If Google and Facebook spend both changed in the same window, I have worked with clients to schedule periods where one channel’s spend stays flat while we raise or lower the budget on the other channel and watch the effect on revenue (a rough sketch of this read follows below)
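To make the logic in the list above concrete, here is a rough sketch of those decision rules as code; the structure and the flat-spend threshold are my own simplification for illustration, not a formal model:

```python
# Rough sketch of the directional read described above. The threshold is an
# arbitrary illustration, not a recommendation.

def read_facebook_budget_test(revenue_change_pct: float,
                              google_spend_change_pct: float,
                              flat_threshold_pct: float = 2.0) -> str:
    """Interpret a Facebook budget increase, given what else moved that window."""
    google_flat = abs(google_spend_change_pct) <= flat_threshold_pct
    if google_flat and revenue_change_pct > 0:
        return "Facebook increase likely produced a positive return"
    if google_flat:
        return "Facebook increase likely did not pay off"
    return ("Google spend also moved; hold one channel flat next period "
            "before drawing conclusions")

# Example: revenue up 8% in a window where Google spend moved less than 1%.
print(read_facebook_budget_test(revenue_change_pct=8.0,
                                google_spend_change_pct=0.5))
```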

While this is certainly not exact, it helps to approach digital ad buying more directionally, looking for generally correct insights you can act on for optimization, rather than chasing exact attribution.

Truly accurate attribution has, in my opinion, always been a myth.

In the realm of digital marketing, different channels will always share some amount of credit for conversions, and while you can use different models to assign credit based on where in the customer journey the various touch points occurred, you’d have to be omniscient to truly know which ad or ads had the most to do with the purchase decision. It’s impossible.

However, post-purchase surveys can help here (have you met KNO Post-Purchase Surveys?), and they are becoming a larger and more critical piece of the picture as accurate platform reporting erodes. Keep in mind that while post-purchase surveys help tell the attribution story, they will never paint the exact, down-to-the-transaction picture we’ve become accustomed to.

Ultimately, success or failure in digital ad buying is based upon how changes in marketing spend on various channels affect revenue generation.

In the absence of the accurate tools we’ve become accustomed to, that is what we are focusing on (and so far, it’s worked well). In very few cases have we seen a significant negative change in MER, and for some clients we’ve seen it improve.

Our hope is that Facebook comes out with tools that get us closer to the level of detail we previously had in our ads reporting, but we’re not depending on that.

We are ultimately in a results-driven business, and that result is profitable revenue, so if what we’re doing as media buyers has positive effects on client revenue, we are going to keep doing it … and if it has negative effects, we’re going to change it.  And for now, that’s the best we can do.
