Lessons Learned

The Channel We Thought Was Working

22% of 'conversions' weren't real
<8% true incremental contribution
30% budget efficiency improvement

The Story

For months, one of our paid channels looked like a solid contributor. Platform reporting showed it driving approximately 25% of our installs at an acceptable CPI. We were allocating meaningful budget to it. The dashboards were green. By every standard metric, it was working.

Then we ran our first geo-lift incrementality test.

When we suppressed paid spend in the control region, total installs there barely dropped. The "conversions" this channel was claiming credit for were happening anyway. True incremental contribution: less than 8%. We had been spending thousands of dollars per month on a channel that was almost entirely cannibalizing organic growth — and our attribution model was hiding it.
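To make the readout concrete, here is a minimal sketch of how a geo-lift result like this gets turned into an incrementality number. All figures, variable names, and the scaling factor are hypothetical placeholders, not the actual test data.

```python
# Hypothetical geo-lift readout; every figure below is illustrative, not the real test data.
claimed_installs = 10_000        # installs the platform attributed to the channel in the period
test_geo_installs = 40_800       # total installs in geos where paid spend continued
control_geo_installs = 40_000    # total installs in matched geos where spend was suppressed
control_scaling = 1.0            # adjust if test and control geos differ in baseline volume

# Counterfactual: installs the test geos would have produced with no paid spend
expected_without_spend = control_geo_installs * control_scaling

# Installs the channel actually caused
incremental_installs = test_geo_installs - expected_without_spend

# Share of the platform's claimed conversions that were truly incremental
incrementality = incremental_installs / claimed_installs
print(f"Incrementality: {incrementality:.0%}")  # ~8% on these made-up numbers
```

The key point the sketch illustrates: incrementality is measured against the counterfactual from the suppressed geos, not against what the attribution platform claims.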

What I did

Redesigned our entire measurement approach. Built the incrementality testing methodology described in the Incrementality Testing case study. Implemented incrementality-calibrated attribution that adjusts platform-reported numbers with geo-lift coefficients. Reallocated the wasted budget to genuinely incremental channels, improving overall budget efficiency by 30%.
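As a rough illustration of what "incrementality-calibrated attribution" means in practice, the sketch below scales each channel's platform-reported installs by its geo-lift coefficient and recomputes cost per install on that basis. Channel names, spend figures, and coefficients are hypothetical, not the real portfolio.

```python
# Illustrative sketch of incrementality-calibrated attribution.
# Channels, spend, installs, and lift coefficients are hypothetical placeholders.
channels = {
    "channel_a": {"reported_installs": 10_000, "spend": 30_000, "lift_coef": 0.07},
    "channel_b": {"reported_installs": 6_000,  "spend": 24_000, "lift_coef": 0.85},
}

for name, c in channels.items():
    # Scale the platform's claimed installs by the measured incrementality coefficient
    incremental_installs = c["reported_installs"] * c["lift_coef"]
    reported_cpi = c["spend"] / c["reported_installs"]   # CPI the dashboard shows
    incremental_cpi = c["spend"] / incremental_installs  # cost per genuinely incremental install
    print(f"{name}: reported CPI ${reported_cpi:.2f} -> incremental CPI ${incremental_cpi:.2f}")
```

Comparing reported CPI to incremental CPI is what drove the reallocation: a channel can look cheap on the dashboard and still be the most expensive source of truly incremental installs.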

What I learned

Attribution tools tell you who gets credit. Incrementality tests tell you who deserves it. These are fundamentally different questions, and most marketing teams never ask the second one. The months of wasted spend were the cost of learning — and the incrementality testing capability that resulted is now the strongest skill on my resume.

What I'd do differently

Run the incrementality test in Month 3, not Month 7. The measurement infrastructure should come before scaling, not after. Every month of delayed incrementality testing is a month of potentially wasted budget you can never recover.