Media mix modeling is having a second life. With tracking restrictions eroding attribution accuracy, consultancies are selling MMM as the answer. Board decks are full of saturation curves and channel contribution charts. CMOs quote their MMM results as gospel in every meeting.

Most of these models are not worth the Looker dashboard they are displayed on.

What MMM Actually Is

A statistical model that tries to explain historical sales movements as a function of marketing spend (plus macroeconomic factors, seasonality, and promotional activity). It outputs a curve for each channel showing the estimated contribution and the point of diminishing returns.
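The core structure can be sketched in a few lines. This is a minimal, illustrative toy with synthetic data, not any specific vendor's model: each channel's spend is passed through a carryover ("adstock") transform and a diminishing-returns curve, then regressed against sales alongside controls like seasonality. The decay rates, half-saturation points, and coefficients below are made up for the demonstration.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric carryover: each week's effect includes a decayed
    fraction of prior weeks' spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def saturate(x, half_sat=100.0):
    """Simple diminishing-returns curve (Hill function, exponent 1)."""
    return x / (x + half_sat)

# Toy weekly data: two channels plus a seasonality control.
rng = np.random.default_rng(0)
weeks = 156  # three years of weekly data
tv = rng.uniform(0, 200, weeks)
meta = rng.uniform(0, 150, weeks)
season = np.sin(2 * np.pi * np.arange(weeks) / 52)

X = np.column_stack([
    saturate(adstock(tv, decay=0.6)),                  # slow-decaying channel
    saturate(adstock(meta, decay=0.2), half_sat=80.0),  # fast-decaying channel
    season,
    np.ones(weeks),  # baseline sales (intercept)
])
sales = X @ np.array([500.0, 300.0, 50.0, 1000.0]) + rng.normal(0, 20.0, weeks)

# Fit by ordinary least squares and read off estimated contributions.
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef.round(1))
```

Real implementations add Bayesian priors, more flexible saturation shapes, and many more controls, but the skeleton is this: transform spend, then regress.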

Done well, it is one of the most useful tools in marketing. It answers strategic questions that no attribution tool can touch. How much should we spend on TV? Are we oversaturated on Meta? What is the right split between brand and performance?

Done badly, it produces confident-looking charts based on correlations that do not survive contact with reality.

How Most Companies Do It Badly

Not enough historical data. Serious MMM needs at least two to three years of weekly data, with meaningful variation in channel spend. Most companies feed eight months of flat spend into the model and act surprised when the results are unstable.

No holdout or validation. The model fits the historical data. That does not tell you anything about whether it predicts the future. A model that explains the past perfectly can still be wrong about next quarter.
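The check itself is cheap. A hedged sketch with synthetic data: fit on everything except the most recent quarter, then score the model on the weeks it never saw. All names and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = 156
spend = rng.uniform(0, 1, weeks)                # illustrative spend series
X = np.column_stack([spend, np.ones(weeks)])
sales = X @ np.array([400.0, 1000.0]) + rng.normal(0, 30.0, weeks)

holdout = 13                                     # hold out the most recent quarter
X_train, y_train = X[:-holdout], sales[:-holdout]
X_test, y_test = X[-holdout:], sales[-holdout:]

# Fit only on the training window, then score on unseen weeks.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = X_test @ coef
mape = np.mean(np.abs(pred - y_test) / y_test) * 100
print(f"holdout MAPE: {mape:.1f}%")
```

A model that cannot predict a quarter it was not fitted on has no business allocating next year's budget.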

Ignored external factors. The competitor launched a big campaign. A product stockout killed a month of Meta performance. A PR crisis tanked brand search. If these are not coded into the model, the model assigns the revenue drop to whatever channel happened to move at the same time.

Overfit on too many variables. When you have 50 variables and 150 weeks of data, the model can fit almost anything. That does not mean the relationships are real.
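The failure mode is easy to reproduce. In this sketch, 50 variables of pure noise are regressed against 150 weeks of sales that contain no signal at all; the in-sample fit still looks respectable, and the holdout fit collapses.

```python
import numpy as np

rng = np.random.default_rng(2)
weeks, n_vars = 150, 50
X = rng.normal(size=(weeks, n_vars))
sales = rng.normal(size=weeks)   # pure noise: no variable has any real effect

# Fit on the first 120 weeks, hold out the last 30.
split = 120
coef, *_ = np.linalg.lstsq(X[:split], sales[:split], rcond=None)

def r2(A, y, b):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = ((y - A @ b) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

r2_in = r2(X[:split], sales[:split], coef)
r2_out = r2(X[split:], sales[split:], coef)
print(f"in-sample R^2: {r2_in:.2f}   holdout R^2: {r2_out:.2f}")
```

With 50 free parameters and 120 observations, the model "explains" a large share of random noise in-sample. The holdout R^2 tells the truth.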

Lagging effects ignored. Brand spend today affects sales in future quarters. If the model treats all channels as instant, it will systematically underweight brand.
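The underweighting can be shown directly. In this illustrative sketch (synthetic data, made-up decay rate), sales respond to a decayed sum of past brand spend. A lag-blind model that regresses on same-week spend captures only the immediate slice of the effect; accounting for carryover recovers the full long-run return per euro.

```python
import numpy as np

def adstock(spend, decay):
    """Geometric carryover of spend across weeks."""
    out, carry = np.zeros(len(spend)), 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(3)
weeks = 156
brand = rng.uniform(0, 100, weeks)
# True data-generating process: slow carryover (decay 0.8).
sales = 5.0 * adstock(brand, decay=0.8) + rng.normal(0, 10.0, weeks)

X_raw = np.column_stack([brand, np.ones(weeks)])                # lag-blind
X_ads = np.column_stack([adstock(brand, 0.8), np.ones(weeks)])  # with carryover

coef_raw, *_ = np.linalg.lstsq(X_raw, sales, rcond=None)
coef_ads, *_ = np.linalg.lstsq(X_ads, sales, rcond=None)

# Geometric carryover sums to 1 / (1 - decay) over time.
total_effect = coef_ads[0] / (1 - 0.8)
print(f"lag-blind estimate:   {coef_raw[0]:.1f} of sales per euro")
print(f"true long-run effect: {total_effect:.1f} of sales per euro")
```

The lag-blind model is not slightly off; it misses the entire tail of the response, which for brand channels is most of the value.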

What A Good MMM Looks Like

Run by someone who understands both statistics and marketing. Long enough data window with real spend variation. Holdout testing against recent periods. External factors coded in. Results presented with confidence intervals, not single-point estimates. Calibrated against incrementality tests where possible.
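One of those requirements, intervals rather than point estimates, can be sketched with a residual bootstrap on a toy fit. Everything here is illustrative: the point is that a channel effect should be reported as a range, and a wide range is itself a finding.

```python
import numpy as np

rng = np.random.default_rng(4)
weeks = 156
spend = rng.uniform(0, 1, weeks)
X = np.column_stack([spend, np.ones(weeks)])
sales = X @ np.array([400.0, 1000.0]) + rng.normal(0, 50.0, weeks)

coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
fitted = X @ coef
resid = sales - fitted

# Residual bootstrap: refit on resampled residuals to get a
# distribution over the channel coefficient.
draws = []
for _ in range(1000):
    y_boot = fitted + rng.choice(resid, size=weeks, replace=True)
    b, *_ = np.linalg.lstsq(X, y_boot, rcond=None)
    draws.append(b[0])
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"channel effect: {coef[0]:.0f}  (95% interval {lo:.0f} to {hi:.0f})")
```

A board slide that says "Meta drives somewhere between 300 and 500" is less snappy than "Meta drives 412", and far more honest.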

That is expensive and slow. It produces results that are genuinely useful for strategic budget decisions. It does not produce snappy board slides.

The Sales Pitch You Need To See Through

When a consultancy promises you an MMM in six weeks for twenty thousand euros, you are buying astrology. The model will produce numbers. The numbers will look confident. They will be largely meaningless, because there is no way to build a real MMM in six weeks without the historical data and validation required.

The results will be used to justify whatever the CMO wanted to do anyway. Everyone will feel more scientific about it. The next quarter's performance will be unchanged.

What To Actually Do

If you have the budget for a real MMM and enough historical data with meaningful variation, it is one of the best investments you can make. Go in knowing what good looks like. Demand holdouts. Demand validation. Cross-check against incrementality tests.

If you cannot do it properly, do not do it at all. Use simpler tools. Channel-level incrementality tests. Geo holdouts. Top-line business outcomes. They are less sophisticated and more useful than an overfit MMM that nobody can defend.

Sources

No formal citations. Claims draw on direct audit work and on publicly available frameworks (Byron Sharp, John Dawes / B2B Institute).