By Peter Swan
A passer-by happens upon a drunk searching for a lost wallet under a streetlight. With nothing in plain sight, the passer-by asks, “Where did you drop your wallet?” “Over there,” gestures the drunk across the street, “but I’m looking here because this is where the light is.”
We often look for answers in the easiest place and not necessarily where the answer is to be found. As marketing moves from subjective art toward objective, data-driven science, are we seeing the emergence of a streetlight effect?
Are even the very best big-data-driven practices guilty of asking the wrong questions of the wrong data?
Wrong from the start
Most companies turn to analytics when early growth starts to slow. The familiar refrain, “Let’s make better use of our existing data”, heralds the onset of maturity, when the early days of triple- and double-digit growth are well and truly past.
Initial questions asked of big data are typically, “Who are our best customers?” and “Which products are most profitable?”
It soon becomes clear that performance differs by region, season and a host of other factors. So, it’s not long before we want to know, “How do quarterly sales in region A compare with region B, on products X, Y, and Z?”
Next comes propensity to respond (PTR) modelling, used to classify prospects for acquisition, cross-sell, churn, or fraud. Where they exist, single customer views enable an entire family of PTR models used to determine next best actions.
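In spirit, a PTR model reduces to scoring each customer with a fitted classifier and ranking them by predicted response probability. The sketch below is a minimal, illustrative example: the logistic coefficients and the recency/frequency/monetary features are hypothetical, not taken from any real deployment.

```python
import math

# Hypothetical coefficients from a previously fitted logistic
# propensity-to-respond model. Features: recency (months since last
# purchase), frequency (orders per year), monetary (avg order value, $00s).
WEIGHTS = {"intercept": -1.5, "recency": -0.20, "frequency": 0.35, "monetary": 0.10}

def propensity_to_respond(recency, frequency, monetary):
    """Return the modelled probability that a customer responds to an offer."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["recency"] * recency
         + WEIGHTS["frequency"] * frequency
         + WEIGHTS["monetary"] * monetary)
    return 1.0 / (1.0 + math.exp(-z))

customers = {
    "A": (2, 6, 3.0),   # recent, frequent buyer
    "B": (18, 1, 1.2),  # lapsed, infrequent buyer
}
scores = {name: propensity_to_respond(*feats) for name, feats in customers.items()}
ranked = sorted(scores, key=scores.get, reverse=True)  # contact list, best first
```

A production system would fit the weights from historical response data, but the ranking step, which drives who gets contacted, looks exactly like this.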
Competing marketing priorities soon warrant marketing mix modelling, to estimate the value of advertising spends across different channels. This naturally leads to attribution modelling, to estimate how each channel contributes to the final sale.
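Attribution models allocate each sale's value back across the channels that touched the customer. One of the simplest rules is linear attribution, which splits credit equally across all touches; the journeys and values below are invented purely to show the mechanics.

```python
from collections import defaultdict

# Hypothetical customer journeys: the channels touched before each sale,
# paired with the sale's value.
conversions = [
    (["display", "email", "search"], 90.0),
    (["search"], 50.0),
    (["email", "search"], 40.0),
]

# Linear attribution: every touch in a journey gets an equal share of the sale.
credit = defaultdict(float)
for touches, value in conversions:
    share = value / len(touches)
    for channel in touches:
        credit[channel] += share
```

Alternatives such as last-touch or time-decay attribution change only the share calculation; the bookkeeping is the same.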
The current holy grail of big-data-driven marketing is to offer in real time the most likely product, at the most likely price, to the most likely customer, at the most likely time, via the most likely channel.
The past doesn’t always help predict the future
But do big data and its analysis make sense in the first place?
Like the drunk under the streetlight, have we been seduced into looking for the answers where it is easiest? Namely, in the data we gathered from past sales to previous customers.
Is this relevant for understanding future sales to future customers?
Nothing in the customer data gathered, or in the way it is presently being analysed, addresses the fundamental consumer desire: to find the best available combination of price and product at the lowest search cost.
All that segmenting and clustering and PTR scoring leaves our future consumers cold, stranded, outnumbered – feeling besieged and beset.
Consumers are bounded rational humans optimised over generations for “fight or flight” and not for solving the multidimensional optimisation problem that is rational consumer choice.
Tasked with buying a car, my siblings, with common genetic and environmental influences, will likely arrive at different consumption choices to mine.
If those closest to me exhibit different preferences, then why are these “previous customer” strangers with no common nature or nurture to me being used to suggest products for me?
Why model the choices of thousands of people I don’t know, and who don’t know me, in an effort to suggest products to me?
No consumer identifies with the clusters or segments thrown up by maximum likelihood models. In fact this type of modelling belies the constant state of flux wrought by Adam Smith’s invisible hand, and writ large in every single consumption choice.
It is a complex and rapidly changing world we inhabit with little known by these analytical models about a customer’s current preferences and circumstances.
The circumstances of markets, like those of individuals, can change in an instant. Products sell out, forcing consumers to choose from what’s available or to wait. Products stagnate.
Promotions and discounts alter the relative attractiveness of one product compared with another, stimulating sales of one and depressing sales of another.
Individual finances wax and wane as personal circumstances alter. Each and every purchase decision is a moveable feast. Even simple choices become rapidly complicated.
It is little wonder consumers throw their hands up and head for the safe harbour of brand, or convenience, or availability.
Focus on ‘small data’ instead
The data we should be analysing, small data, is the product attributes and prices that change over time. This is the data consumers – your customers and your competitors’ customers – are using when choosing.
To the extent of their ability, each consumer is assessing, comparing and evaluating the products and services on offer. These are bundles of attributes with their corresponding “shadow prices”.
Each trades this attribute off against that, trying to identify the combination of attributes and shadow prices that best suits them, taking into account their own dynamically altering preferences over the attributes and their own changeable circumstances.
What you should be doing is maximising the “consumer surplus” of your potential customers: the difference between their “willingness to pay” for the bundle of attributes your product provides and the price they actually pay. They will then tend to choose your product in preference to that of your competitors.
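The choice rule described above can be sketched in a few lines: value each product as a bundle of attributes weighted by shadow prices, subtract the asking price, and pick the product with the greatest surplus. The shadow prices, products, and attribute levels below are illustrative assumptions only.

```python
# Hypothetical shadow prices: what one unit of each attribute is worth
# to this particular consumer, in dollars.
shadow_prices = {"battery_life": 8.0, "camera": 5.0, "storage": 0.5}

# Hypothetical catalogue: each product is a bundle of attribute levels
# plus an asking price.
products = {
    "Phone X": ({"battery_life": 10, "camera": 12, "storage": 128}, 180.0),
    "Phone Y": ({"battery_life": 14, "camera": 8, "storage": 256}, 200.0),
}

def willingness_to_pay(attributes):
    """Value a bundle: attribute levels weighted by their shadow prices."""
    return sum(shadow_prices[a] * level for a, level in attributes.items())

def consumer_surplus(attributes, price):
    """Surplus = willingness to pay minus the price actually charged."""
    return willingness_to_pay(attributes) - price

surpluses = {name: consumer_surplus(attrs, price)
             for name, (attrs, price) in products.items()}
chosen = max(surpluses, key=surpluses.get)  # the surplus-maximising choice
```

The point of the article's argument is that these shadow prices differ by person and shift over time, which is why a static segment-level model cannot stand in for them.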
Analysing customer data to minimise the error of estimation isn’t helping your customers to solve their problems – it is proliferating them. The manifold combinations and permutations are adding to the burden, not lightening the load.
Customers will pay you with their custom, for simply reducing their search costs.
Faced as they are with overwhelming choice, customers want up-to-date, reliable, valid and trustworthy recommendations: ones that embody their own personal preferences and budgets, and that are available instantly.
This article first appeared at BusinessThink.