By Thomas Vladeck, Co-Founder of Recast
Every few months, a new measurement platform promises to be marketing’s “single source of truth.” The pitch is seductive: one dashboard that shows exactly what’s working and what’s not. No conflicting data. No uncertainty. Just precise answers about where to spend your next dollar.
But there’s a flaw in this logic that few acknowledge. True incrementality — the business impact marketing efforts actually cause, the very thing measurement platforms aim to uncover — is fundamentally unknowable.
To borrow from Plato, marketing measurement is best understood as the shadows on the cave wall. We never see true incrementality directly. Multi-touch attribution (MTA), experiments, media mix modeling (MMM), and other methods each cast a different shadow. None reveal the object we’re searching for.
The danger isn’t using imperfect methods. It’s believing that any single one reveals the truth.
Multi-Touch Attribution: Clean Interfaces, Messy Causality
For years, digital-tracking-based attribution was the default answer for marketers. MTA platforms track user journeys, assign credit across touchpoints, and recommend optimizations toward “best-performing” channels.
Attribution excels at telling you which touchpoints were present on the path to conversion. What it cannot tell you is which touchpoints caused the conversion.
This is why channels like Branded Search, Paid Social Retargeting, and Affiliates often appear wildly effective. They sit closest to purchase. Meanwhile, upper-funnel channels like YouTube, TV, or other reach campaigns look weak, even when they generate the demand that attribution credits elsewhere.
The uncomfortable truth is that attribution is often correlation masquerading as causation, wrapped in a layer of false precision that makes the numbers feel more certain than they are. Attribution can be useful, but it should never be mistaken for causal truth.
Experiments and MMM: Better, Still Not the Answer
As attribution’s limitations became clear, the industry shifted toward methods that at least attempt to measure causality: experiments and media mix models.
This is progress, but it’s a mistake to treat these methods as new silver bullets.
Experiments are one of the most rigorous tools we have, but running them in the real world is messy. They’re slow – often taking 4–8+ weeks to complete – difficult to run across many channels, and they frequently return wide uncertainty intervals that are hard to act on.
A lift test estimating a 4.2x ROI with a range of 1.1x to 8.8x is directionally useful, but that range might span “worst channel” to “best channel.” Experiments are also time-bound – run the same test three months later and you may get a very different answer due to seasonality, creative changes, or shifting market conditions.
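To see how those intervals arise, here is a minimal sketch in Python, assuming a simple test/control design. The spend, daily lift, and test length are invented for illustration and simulated rather than measured; a real test would read these from matched test and control markets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geo lift test: daily incremental revenue in the test markets.
# In a real test these values come from your test/control comparison, not a simulation.
n_days = 42                    # ~6-week test window
spend = 120_000                # total media spend during the test
daily_lift = rng.normal(loc=12_000, scale=30_000, size=n_days)  # noisy incremental revenue

point_roi = daily_lift.sum() / spend

# Bootstrap the ROI to see how wide the uncertainty interval really is.
boot_rois = []
for _ in range(10_000):
    sample = rng.choice(daily_lift, size=n_days, replace=True)
    boot_rois.append(sample.sum() / spend)

lo, hi = np.percentile(boot_rois, [2.5, 97.5])
print(f"Point estimate: {point_roi:.1f}x ROI, 95% interval: {lo:.1f}x to {hi:.1f}x")
```

Because day-to-day noise is large relative to the incremental signal, the interval around the point estimate is wide; that is the typical shape of a real lift test readout, not a failure of this particular toy.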
So while experiments are better than attribution’s false certainty, they similarly fail to give us Truth.
Media Mix Modeling promises a holistic view: feed in historical data, get channel-level incremental ROIs that account for channel saturation and time lags. In theory, it’s elegant. But in practice, MMMs are trivially easy to build and extremely hard to validate.
With enough free parameters (MMMs have many), you can make a model say almost anything. Traditional fit metrics like R-squared or statistical significance also tell you nothing about whether a model captures real causality; a model can fit historical data perfectly well and still be wrong about what actually drives business outcomes.
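A toy example makes the point concrete. The sketch below (Python, simulated data with no real relationship in it) gives an MMM-style model plenty of free parameters, lagged and transformed copies of each channel's spend, and lets it fit two years of pure noise:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Two years of weekly data: 10 channels of spend and an outcome that, by
# construction, is unrelated to any of them (pure noise around a baseline).
n_weeks, n_channels = 104, 10
spend = rng.lognormal(mean=10, sigma=0.5, size=(n_weeks, n_channels))
revenue = rng.normal(loc=1_000_000, scale=50_000, size=n_weeks)

# MMM-style flexibility: each channel gets several lagged and saturating
# transforms, roughly what adstock and saturation parameters buy you.
features = []
for lag in range(4):                    # 0-3 week lags
    lagged = np.roll(spend, lag, axis=0)
    features.append(lagged)             # linear term
    features.append(np.log1p(lagged))   # a saturating term
X = np.hstack(features)                 # 10 channels * 4 lags * 2 shapes = 80 columns

train, test = slice(0, 78), slice(78, None)
model = LinearRegression().fit(X[train], revenue[train])

print("In-sample R^2: ", round(model.score(X[train], revenue[train]), 3))
print("Out-of-sample R^2:", round(model.score(X[test], revenue[test]), 3))
# With 80 free parameters and 78 training weeks, the historical fit is
# essentially perfect, while the forward prediction is worthless, because
# there was never any real relationship to recover.
```

The in-sample R-squared looks spectacular; the out-of-sample number collapses. Nothing about the fit metric told you the model was nonsense.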
The core problem is that marketing measurement is a causal inference problem. We’re trying to estimate a counterfactual – what would have happened if we’d deployed media differently – that can never be observed. In practice, most MMMs are wrong in their attempts to estimate this and will fail basic validation checks when put to the test.
Stop Looking for Tools. Build Systems.
So should marketers give up on measurement? No. The answer isn’t finding the perfect tool. It’s building a system that’s aware of the imperfections of each component method.
The teams that win don’t search for silver bullets. They build measurement systems grounded in three principles:
- Validate with forecasts, not fit metrics. If a model understands causality, it should predict future outcomes even as a media mix changes. Document forecasts, compare them to reality, and judge models by their forward accuracy rather than historical fit.
- Cross-validate across methods. If your MMM says a channel delivers 5.5x ROI but a recent experiment shows 1.1x, something is wrong. Disagreement is not a failure, but a signal.
- Create continuous feedback loops. Run small experiments constantly: holdouts, go-dark tests, new channels. Each test either reinforces your models or exposes where they’re broken.
Over time, these methods reinforce and correct each other. Hypotheses emerge, experiments test them, models update, forecasts improve, and the cycle repeats.
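As a rough sketch of what the first two principles might look like in code (the channel names, numbers, and thresholds below are hypothetical, not a prescribed workflow): document a forecast before the period, score it against reality afterward, and flag channels where the model and recent experiments disagree.

```python
from dataclasses import dataclass

@dataclass
class ChannelCheck:
    channel: str
    mmm_roi: float                  # incremental ROI the MMM currently claims
    experiment_roi: float | None    # most recent lift test for this channel, if any

# 1) Forward accuracy: the forecast is written down *before* the period,
#    then compared with what actually happened.
forecast_revenue = 4_800_000   # documented ahead of time (hypothetical)
actual_revenue   = 5_400_000   # observed after the period (hypothetical)
forward_error = abs(forecast_revenue - actual_revenue) / actual_revenue
print(f"Forecast error for the period: {forward_error:.0%}")

# 2) Cross-method agreement: the MMM's ROI estimates vs. recent experiments.
checks = [
    ChannelCheck("paid_search", mmm_roi=3.0, experiment_roi=2.6),
    ChannelCheck("paid_social", mmm_roi=5.5, experiment_roi=1.1),
    ChannelCheck("tv",          mmm_roi=1.8, experiment_roi=None),
]

for c in checks:
    if c.experiment_roi is None:
        continue  # no recent test here; a natural candidate for the next one
    gap = abs(c.mmm_roi - c.experiment_roi)
    if gap > 1.0:  # threshold is arbitrary, for illustration only
        print(f"{c.channel}: model says {c.mmm_roi}x, experiment says {c.experiment_roi}x; investigate")
```

The point isn't the specific code; it's that forecasts, experiments, and model estimates all get written down and confronted with each other on a schedule, rather than trusted one at a time.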
All We See Are Shadows
Marketers are stuck in Plato’s cave. Attribution, experiments, and MMM each cast different shadows. None reveals true incrementality. The truth remains unknowable.
But progress doesn’t require certainty. It requires humility, iteration, and systems that make us slightly smarter each time.
The teams that win stop searching for a single source of truth. They build something better.
And “better” isn’t the consolation prize. It’s the only attainable goal of marketing measurement.

