The “Quality vs. Volume” fallacy in user acquisition

Creative fatigue is a common problem for marketing teams: a set of creatives has been seen by so many people within a particular audience, on a particular channel, that it stops converting. Few people remain unexposed, and so the most receptive prospects, the “low-hanging fruit,” have been exhausted.

When this happens, many teams begin onboarding new sources of traffic to make up for the decline: they’ll run small test campaigns on new channels over a week or two and fold those channels into their traffic portfolios if the results are satisfactory. These tests often have budgets in the $5-10k range: enough to acquire, depending on the product, a few thousand users over the course of the test.

Often, these new channels are evaluated based on the quality of the traffic that was provided over the test, independent of its install volumes — that is, user-level metrics are the only things considered. User Acquisition managers can be heard to say things like, “Channel X provides great traffic, but at low volumes,” implying the existence of some indifference curve between traffic quality (cohort LTV) and traffic volume that would make a team indifferent to high volumes at low quality versus low volumes at high quality.
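The implicit assumption behind that indifference curve can be made concrete. If a team truly only cared about total cohort value, any (volume, LTV) pairs with the same product would be interchangeable. A minimal sketch, with wholly invented numbers:

```python
# Illustration of the implied indifference curve: channels are treated
# as equivalent whenever installs x LTV is constant. Numbers invented.

def total_cohort_value(installs: int, ltv: float) -> float:
    """Total expected value of a day's acquired cohort."""
    return installs * ltv

# Two channels on the same hypothetical curve (k = $50,000/day):
high_quality_low_volume = total_cohort_value(installs=1_000, ltv=50.0)
low_quality_high_volume = total_cohort_value(installs=25_000, ltv=2.0)
```

The rest of this piece argues that this equivalence breaks down in practice: it ignores the marketer’s opportunity cost and assumes the low-volume channel’s LTV can even be measured.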

This isn’t really true — at least, it’s not strictly true. Firstly, the relationship between price and volume in the digital advertising marketplace isn’t rigidly inversely correlated: quality and volumes can move together in some instances.

Ad channels usually get paid when a conversion happens, which means they’re unlikely to show your ad if its click-through rate is low, unless the delta between your bid and the average bid makes up for the CTR shortfall and produces as much revenue for them. So for products with very narrow appeal, price and quality move in opposite directions: my volume of installs increases because I bid more to increase the number of times my ad is shown, but since my product isn’t broadly appealing, the ad channel begins targeting less relevant people and spamming my ad over and over until they convert. Thus, price increases and quality decreases.
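The channel’s side of this trade can be sketched as simple expected-revenue arithmetic: a network ranking ads by something like eCPM (bid times conversion likelihood) will only serve a low-CTR ad if the bid compensates. All rates and bids below are invented for illustration:

```python
# Hypothetical eCPM math for a cost-per-install bid. A network earns
# bid x (installs per impression), so a low-CTR ad must overbid to
# match a high-CTR competitor's expected revenue. Numbers invented.

def ecpm(bid_per_install: float, ctr: float, cvr: float) -> float:
    """Expected network revenue per 1,000 impressions."""
    # installs per impression = click-through rate x click-to-install rate
    return bid_per_install * ctr * cvr * 1000

strong_ad = ecpm(bid_per_install=2.00, ctr=0.020, cvr=0.10)  # broad appeal
weak_ad = ecpm(bid_per_install=2.00, ctr=0.005, cvr=0.10)    # narrow appeal

# Bid the weak ad would need just to tie the strong ad's eCPM:
required_bid = strong_ad / (0.005 * 0.10 * 1000)
```

At the same $2.00 bid, the narrow-appeal ad generates a quarter of the network revenue per impression; it has to bid roughly 4x as much to be shown at all, which is exactly the “price up, quality down” dynamic described above.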

But for broadly appealing products, this doesn’t happen as easily: such products can scale their marketing spend (increase bids) and see their CTRs hold flat, and once they outbid their competitors for the most lucrative traffic, the quality of their installs may actually increase. In this way, LTV and bid price move together, and volume increases because CTRs don’t drop drastically.

But the second reason the quality versus volume fallacy doesn’t hold up in digital marketing is that there is a high opportunity cost to a marketer’s time, and it takes about as much time to optimize a campaign on a channel delivering high volumes as one delivering low volumes. So if there is an indifference curve between quality and volume, it should only exist for a team after some minimum yet substantial level of installs is being delivered by a channel.

Also: whenever a channel is added to a marketing team’s traffic portfolio, it adds complexity to the reporting and analysis processes and creates an opportunity for something in the toolchain to break. And low-volume channels often simply don’t drive meaningful revenue: user economics are important, but so is absolute revenue.

The last reason why quality versus volume is a deceptive dynamic is that volume is really a necessary precondition to understanding quality: without much data, teams often can’t capably measure the quality of the traffic they’re receiving. In How Much Data is Needed to Predict LTV?, I walked through the difficulty of estimating LTV with even moderately sized cohorts: when a channel is generating 20, or 50, or even 100 installs per day, the variance in the daily spend values of those users probably doesn’t allow for an actionable LTV metric to be calculated.
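The measurement problem can be simulated. Per-user spend in freemium products is typically heavily skewed (most users spend nothing, a few spend a lot), so the spread of a cohort’s mean LTV estimate shrinks only as 1/√n. A rough sketch with invented monetization parameters:

```python
# Rough simulation of LTV estimate noise at different cohort sizes.
# Monetization parameters (5% payer rate, lognormal spend) are invented.
import random

random.seed(0)

def simulate_user_ltv() -> float:
    # Assume 5% of users monetize; spenders' LTV is lognormally skewed.
    if random.random() < 0.05:
        return random.lognormvariate(2.5, 1.0)
    return 0.0

def cohort_mean(n: int) -> float:
    return sum(simulate_user_ltv() for _ in range(n)) / n

def estimate_spread(n: int, trials: int = 200) -> float:
    """Standard deviation of the mean-LTV estimate across many cohorts."""
    means = [cohort_mean(n) for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

small = estimate_spread(50)    # e.g. a test channel at 50 installs/day
large = estimate_spread(5000)  # a high-volume channel
```

Under these assumptions, a 50-install daily cohort produces an LTV estimate several times noisier than a 5,000-install cohort, which is why a $5-10k test on a low-volume channel may never yield an actionable quality read at all.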

Generally speaking, a marketer’s time is almost always better spent optimizing campaigns on large channels, thinking through the product’s advertising positioning, experimenting with new marketing formats, or helping the product team use acquisition data when making changes to the product than on onboarding new, lower-tier direct response ad channels. As a team goes down the list of channels past the biggest, best-funded, and most widely used, the likelihood of those channels being able to deliver traffic at levels that make a difference for the business decreases dramatically.
