Every week, someone pitches me a test. A new CTA colour. A slightly different headline. Moving the form from the right side of the page to the left. And every time, I ask the same question: if this works, how much money does it make us?

The answer is almost always a long pause followed by "well, it's about incremental gains." Which is a polite way of saying: not much.

The 0.03% Problem

Here is a real example. A client wanted to A/B test two landing page button variants: "Get Started" versus "Start Your Free Trial." After three weeks and a few thousand visits, the winner improved conversion rate by 0.03 percentage points.

They were genuinely excited. They put it in a slide deck. They called it a "data-driven win."

I did the maths. On their traffic and their average deal size, that 0.03% improvement translated to roughly one extra lead per quarter. Maybe. If the test result even held up outside the test window — which these marginal lifts almost never do.

That is not optimisation. That is a rounding error dressed up as strategy.

Incremental Gains Need Massive Scale

The "incremental gains" argument works. It just does not work for you.

When Facebook tests a new button placement, a 0.01% improvement means millions of dollars in revenue. When Airbnb tweaks their booking flow, even a tiny lift moves real numbers because the denominator is enormous. You are not Facebook. You are not Airbnb.

If your site gets 10,000 visits a month, a 0.03 percentage-point conversion lift is three extra visitors doing what you want. Per month. You could get the same result by writing one good LinkedIn post.
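The arithmetic is simple enough to sanity-check yourself before anyone builds a test. A minimal sketch, using the illustrative numbers above (10,000 visits, a 0.03 percentage-point absolute lift — not the client's real figures):

```python
def extra_conversions_per_month(visits_per_month: int, lift_pp: float) -> float:
    """Extra converting visitors per month from an absolute conversion
    lift expressed in percentage points (e.g. 0.03 means +0.03pp)."""
    return visits_per_month * lift_pp / 100

print(extra_conversions_per_month(10_000, 0.03))  # → 3.0
```

Run this with your own traffic and proposed lift before the test goes on the roadmap. If the output would not survive contact with a CFO, neither will the test.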

Alex Schultz, who ran growth at Facebook, put it simply: if the impact is not easy to see, it is probably not there. At Facebook's scale, real improvements show up fast and clearly. If you have to squint at a dashboard and argue about statistical significance for weeks, the thing you are testing does not matter enough.
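The squinting has a cost you can estimate up front. A rough sketch of how many visitors you would need to even detect a 0.03 percentage-point lift, using Lehr's rule of thumb for comparing two proportions (~80% power at a 5% significance level) and an assumed 3% baseline conversion rate — both assumptions, not figures from the example above:

```python
def visitors_needed_per_arm(baseline_rate: float, lift_pp: float) -> int:
    """Lehr's rule of thumb: n ≈ 16·p(1−p)/d² visitors per variant,
    for roughly 80% power at a two-sided 5% significance level.
    baseline_rate is a proportion (0.03 = 3%); lift_pp is the absolute
    lift in percentage points (0.03 = +0.03pp)."""
    d = lift_pp / 100  # convert percentage points to a proportion
    return round(16 * baseline_rate * (1 - baseline_rate) / d ** 2)

n = visitors_needed_per_arm(0.03, 0.03)
print(n)  # roughly 5.2 million visitors per variant
```

At 10,000 visits a month, collecting that many visitors per variant would take decades. Which is the point: a lift you cannot afford to measure is a lift you cannot afford to chase.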

The Opportunity Cost Nobody Talks About

The real damage is not the failed test. It is what you did not do instead.

Every hour spent setting up a CTA colour test is an hour you could have spent rewriting your entire landing page around a better offer. Every sprint dedicated to testing a new channel is a sprint you could have used to fix the channel already bringing in 80% of your pipeline.

I see this constantly in audits. Teams with beautiful experiment roadmaps and twelve months of micro-tests — meanwhile the Google Ads account is bleeding 30% of budget on irrelevant search terms that nobody has checked since last summer.

Small bets are fine when you have already nailed the big things. But most companies have not. They skip straight to optimisation because it feels sophisticated and low-risk. Testing a button colour is comfortable. Admitting your offer is wrong is not.

How to Know If Your Initiative Is Big Enough

Ask yourself one question: can I explain the expected impact to my CFO in one sentence without using the word "incremental"?

If the answer is no, the initiative is too small.

Good marketing moves are obvious when they work. You launch a new landing page and conversion rate doubles. You cut wasted spend and cost per acquisition drops by 40%. You enter a new market and pipeline grows in a way that shows up on the P&L, not just in a test report.

If you need three months of data and a custom dashboard to prove something moved, it did not move enough to matter. Stop stacking pennies. Go find the big wins.