Nuance wants to know how people feel, how brands are conceptualized and understood, what hidden drivers shape human behavior and what drives business success. Numbers wants to know exactly how big our markets are, what people buy, what price they pay, what path they take and what drives business success.
The difference between these two viewpoints is most pronounced in Insights and Market Research, where individuals proudly describe themselves as ‘quantitative researchers’ or ‘qualitative researchers’. Now I understand the need for specialization, but sometimes we forget that the subjects of the research – people, products and brands – are the same.
Here’s a thought experiment: Imagine you’re a brand researcher, and I give you a file of a million tweets. Your job is to extract insights from this cache and use them to drive the business forward. I then ask: “Do you do quantitative or qualitative research?”
Zen koans and the art of inductive/abductive reasoning
The question is a koan because both answers are equally right and equally wrong. And, like many good koans, the way out is not to answer the question but to un-ask it. Do that, and you will see that the error lies in the premise of the question: the duality of the two types of inquiry.
One reason this thought experiment confuses us is that we associate the qualitative with depth and the quantitative with scale. Yet the data here has both depth (the messiness of random human thoughts) and scale (the million observations).
To get nuance, we need depth. We interact with individuals and use those observations to discover patterns, then form our hypotheses and theories. This inductive reasoning is at the heart of most qualitative approaches, and it cannot be done shallowly: twenty questions on a questionnaire are not enough. We have to explore and probe. That is expensive to do, and complex and expensive to synthesize. This is why qual is usually associated with small sample sizes.
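To make the inductive step concrete, here is a minimal sketch in Python, assuming scikit-learn and a few hypothetical open-ended interview fragments (everything here is invented for illustration): clusters surface candidate themes, and the researcher forms hypotheses from what they reveal.

```python
# A minimal sketch of inductive pattern discovery, assuming scikit-learn.
# The "interview fragments" below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "I buy it because it feels premium",
    "The price is too high for everyday use",
    "It feels luxurious, like a small treat",
    "Too expensive compared to the alternatives",
]

# Turn free text into vectors, then let clusters suggest themes.
X = TfidfVectorizer(stop_words="english").fit_transform(answers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, answer in sorted(zip(labels, answers)):
    print(label, answer)  # inspect clusters, then hypothesize (e.g., premium vs. price)
```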
For numbers we need scale: the more data, the better. We start with a theory and hypotheses and apply analysis and deductive reasoning, using (frequentist) statistical methods to confirm them (or not). These statistical techniques require large samples and consistent data, so the technologies and platforms that support them have treated quantitative research as an automation and repeatability exercise. That is why quantitative research has traditionally been dominated by the problem of scale.
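The deductive workflow runs the other way: start from a hypothesis, then test it against a large, consistent sample. A minimal sketch, assuming SciPy and hypothetical survey data (did campaign exposure lift purchase intent?):

```python
# A minimal sketch of the frequentist workflow, assuming SciPy.
# The survey responses are hypothetical (1 = intends to buy).
from scipy import stats

exposed = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1] * 50  # large, consistent samples
control = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0] * 50

# Hypothesis first, data second: test whether exposure lifts intent.
t_stat, p_value = stats.ttest_ind(exposed, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # reject the null if p < 0.05
```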
The problem with dichotomies
You will often see tables like the following offered to summarize the differences:
| Method | Qualitative | Quantitative |
| --- | --- | --- |
| Result | Nuance | Numbers |
| Approach | Synthesis / Inductive reasoning | Analysis / Deductive reasoning |
| Challenge | Depth | Scale |
We accepted this division not because it was ideal, but because it was necessary. Even so, many proponents of mixed-methods research combine the approaches and build workflows that use the best of both.
The rise of abductive reasoning changes the quantitative/qualitative dichotomy. In abductive reasoning we go beyond merely looking for patterns and ask, "What is the most likely explanation for this (surprising) observation?" In practice, many researchers use what's called an "abductive loop": collect data, notice a surprise, form the best explanation, then collect more data. In a similar way, quantitative methods now often move beyond frequentist approaches (hypothesis testing) toward Bayesian methods, which use data to successively update our beliefs about what is going on.
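Here is a minimal sketch of that successive updating, using a conjugate Beta-Binomial model on a hypothetical question: what share of tweets about our brand are positive? The batch counts are invented for illustration.

```python
# A minimal sketch of Bayesian updating with a conjugate Beta prior.
# Question (hypothetical): what share of brand tweets are positive?
alpha, beta = 1.0, 1.0  # Beta(1, 1): a flat, uninformative prior

batches = [(12, 8), (30, 20), (55, 35)]  # hypothetical (positive, negative) counts
for positive, negative in batches:
    alpha += positive  # each positive tweet updates alpha
    beta += negative   # each negative tweet updates beta
    print(f"posterior mean: {alpha / (alpha + beta):.3f}")
```

Each batch narrows the posterior, which is exactly the "collect data, update beliefs, collect more data" loop described above.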
However, both are still bound by the dichotomy between depth and scale: abduction requires deep observations; Bayesian methods depend on data volume.
Bringing it all home
Fortunately, new technologies and models mean we can now gain depth at scale. It was never that small samples were desirable; it was simply too expensive to collect the observations, and too difficult to synthesize the results, when there were many subjects. Small sample sizes do not define qualitative research. They were just a technological limitation.
Qualitative data collection at scale is now entirely feasible (whether as primary interviews or as mineable data, like the tweet example), thanks to newer AI approaches. Specifically, generative AI/LLMs can perform computational abduction, meaning they have the reasoning power to perform the necessary synthesis.
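As an illustration only, and not a claim about any particular product: a minimal sketch of computational abduction over the tweet cache, assuming the OpenAI Python SDK, a hypothetical tweets.txt file (one tweet per line) and an illustrative model name.

```python
# A minimal sketch of LLM-driven synthesis ("computational abduction").
# Assumptions: the OpenAI Python SDK, an OPENAI_API_KEY in the environment,
# a hypothetical tweets.txt file, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

with open("tweets.txt") as f:
    batch = [line.strip() for line in f][:200]  # keep the prompt small

prompt = (
    "Here are raw tweets about our brand. Propose the three most likely "
    "explanations (themes) behind them, with one supporting tweet each:\n\n"
    + "\n".join(batch)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In a real workflow you would batch and deduplicate the full million tweets, then feed the proposed themes back into the abductive loop described earlier, getting depth and scale at once.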
The gap between qual and quant will fade in the coming years as we realize that the old dichotomy between depth and scale is no longer a limitation. We will see a shift in how data is collected and aggregated and how insights are derived, without a trade-off between depth and scale. We will no longer define our research by these binary constraints, but will instead uncover the truth about our customers, products and brands through simultaneous depth and scale.