What is Research?
Francis Bacon coined the phrase “knowledge is power” in 1597. Hundreds of years later, in a world obsessed with being customer-centric, knowledge of the customer is power. But what is the best way to gain that knowledge? It is actually simple: research.
The word “research” does not have many antonyms. In fact, it has only one: “ignorance” (according to thesaurus.com), and ignorance is defined as a lack of knowledge.
Consequently, the opposite of ignorance is having knowledge, and therefore, by association:
“Research” is defined as the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions. In other words, research uses defined methods of questioning to reach an outcome; questions are therefore a key part of research.
In this article I am going to look at questions that relate to a product, and by a product I am mainly referring to a digital product. I will then explore quantitative and qualitative research, and how organisations can use them to obtain the knowledge required to design digital products that provide lasting and valuable experiences for their customers.
Research starts with the right questions
- What do people want to buy?
- How do we know people will (actually) buy what they say they want to buy?
- Should we provide people with the product they say they want to buy?
- What expectations do people have of the product?
- How well does our digital product meet those expectations?
- How do we make sure we design a product people want to use?
- What is holding us back from meeting those expectations and designing the product?
Research Methods
There are different types of research, so choosing the right research method is an important step, as is understanding how the findings will be presented. Let’s consider the different methods of research and how their outputs differ.
The terms “Quant” and “Qual” are bandied around the digital industry in all sorts of contexts, to the point that they have become ubiquitous and commoditised. In my experience and observation, this has negatively skewed people’s understanding of what they actually mean in terms of research.
Quantitative (Quant) Research
As the name suggests, this type of research is about quantifying the findings; essentially, it produces numbers in response to the question(s) asked. Common quant methods include (but are not limited to) surveys, benchmarking, AB testing and analytics. This type of research is typically more mathematical, with statistics often playing an important role in deriving findings from a population sample.
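To make the role of statistics concrete, here is a minimal sketch of how a single survey result might be turned into a confidence interval, using nothing but Python’s standard library (the respondent and response figures are invented purely for illustration):

```python
from statistics import NormalDist

# Hypothetical survey result: 420 of 1,000 respondents say they would buy the product.
respondents = 1000
said_yes = 420
p_hat = said_yes / respondents  # sample proportion

# 95% confidence interval for the true proportion (normal approximation).
z = NormalDist().inv_cdf(0.975)  # ~1.96
margin_of_error = z * (p_hat * (1 - p_hat) / respondents) ** 0.5

print(f"Sample proportion: {p_hat:.1%}")
print(f"95% confidence interval: {p_hat - margin_of_error:.1%} to {p_hat + margin_of_error:.1%}")
```

In other words, the numbers a survey produces only become findings once you attach a measure of how much they can be trusted.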
Quantitative research explores the “what, when, where”. For example, a market research study may use a survey questionnaire to find out what people want to buy from a series of options. The importance of asking the right question(s) should not be underestimated, as Coke famously found when launching its “new” C2 drink. Despite surveying 200,000 people, Coke failed to consider wider dietary trends, so while the quant data showed a robust “thirst” for the new drink, it was a total flop; estimates put the cost to Coke at around $50M.

Likewise, when leveraging digital analytics, knowing what question(s) to ask before turning to the data is of the utmost importance if you want the data to be actionable. At a very minimum, you need to ask yourself:
- What am I trying to find out?
- What can I measure and analyse to get valuable and meaningful results?
- Who is my analysis for?
There are many more questions you should be asking yourself, depending on your context! Do not be lazy and just engage in “data puking”, as Avinash Kaushik put it.
What quantitative research really has going for it is the robustness of statistical significance: assuming you ask the right question(s), you can rely on the findings to be “truthful”. A simple example of this is AB testing, which is essentially market research (a little controversial, maybe, but true!).
AB testing is all about presenting two or more variations to a sample of the population until statistical significance is reached, proving which one the sample prefers (market research). You can then be confident the same will be true for the wider population. AB testing has grown into a massive industry; that alone is testament to the robustness of quantitative research as a decision-making instrument.
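As a rough illustration of what “reaching statistical significance” means here, the sketch below runs a simple two-proportion z-test on an imaginary AB test using only Python’s standard library; the visitor and conversion counts are made up for the example, and real testing tools apply more sophisticated methods:

```python
from statistics import NormalDist

# Hypothetical AB test results (illustrative numbers only).
visitors_a, conversions_a = 5000, 400   # variation A: 8.0% conversion
visitors_b, conversions_b = 5000, 460   # variation B: 9.2% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled two-proportion z-test: how likely is a difference this big by chance alone?
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
standard_error = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
z = (p_b - p_a) / standard_error
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p-value = {p_value:.3f}")
print("Significant at the 95% level" if p_value < 0.05 else "Not significant yet, keep testing")
```

If the p-value falls below your chosen threshold, you can be confident the preference observed in the sample reflects the wider population rather than random noise.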
Qualitative (Qual) Research
The underlying aim of qualitative research is to get close to and observe a phenomenon; in the context of this article, that means observing how a digital product performs in use. The most common types of qualitative research are broadly ethnographic; when researching digital products, usability testing, diary studies, focus groups and concept testing are frequently used.
Qualitative research assesses behaviour to answer questions about “why” something occurs or “how” to fix or improve something (a digital product in this context). To achieve this, researchers get up close and observe, developing and formulating hypotheses that are then further evaluated. All research should be “objective” and “systematic” to produce unbiased evidence. Qualitative research therefore requires a high level of skill; it can be all too easy for the researcher/observer to influence participants without even realising it.
In contrast to quantitative research, smaller sample sizes are normally sufficient to formulate sound hypotheses. However, small sample sizes can (and often do) cause scepticism, such that the findings are dismissed. Objections to the findings normally go along the lines of: “they are not like our typical users” or “just because 5 users displayed that behaviour does not mean all users will.”
The first objection is (easily) overcome by recruiting the correct participants at the outset. The second objection is more common and, in some ways, less easily overcome, so let me outline why small sample sizes are reliable for this kind of research.
Jakob Nielsen demonstrated (as shown in his curve below) that five to six participants is the sweet spot when conducting usability testing. Beyond six the curve flattens, so when considering mitigating factors such as time and budget, six participants are enough to unearth close to 90% of the usability problems.
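For reference, the shape of Nielsen’s curve comes from the Nielsen and Landauer model, in which the proportion of problems found by n participants is 1 - (1 - L)^n, where L is the proportion of problems an average single participant uncovers (commonly cited as around 31%). A minimal sketch, assuming that value of L:

```python
# Nielsen/Landauer model: proportion of usability problems found by n participants,
# assuming each participant uncovers about 31% of problems on average (L = 0.31).
L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} participants -> ~{found:.0%} of problems found")
```

Running this shows roughly 84% of problems surfaced by the fifth participant and close to 90% by the sixth, after which each additional participant adds very little.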