The concept of ‘strength of evidence’ plays an important role in all fields of research, but is seldom discussed in the context of user research. We take a closer look at what it means for user experience research, and suggest a taxonomy of research methods based on the strength of the data they return. — Philip Hodgson, October 2, 2017
Someone once said, “There are no questions in usability.” I think it was me. I admit it’s not a great quote. It’s not up there with, “Never in the field of human conflict…” or, “One small step for man…” but it’s a good one — and it leads to a useful rule of thumb for UX researchers.
Let me explain.
Some years ago, while working for a large corporation, I was preparing a usability test when the project manager called and asked me to send over the list of usability questions.
“There are no questions in usability,” I replied.
“What do you mean?” she asked. “How can there be no questions? How are you going to find out if people like our new design?”
“But I’m not trying to find out if they like it,” I pointed out in a manner that, in hindsight, seems unnecessarily stroppy. “I’m trying to find out if they can use it. I have a list of tasks, not a list of questions.”
Requests to shovel explicit “What do you think?” questions into UX studies betray the fact that some stakeholders not only misunderstand the purpose of a usability test but also believe that every customer response is necessarily valuable. Such requests show that they are unaware of the distinction between good data and bad data and, as a result, believe that all customer feedback is grist to the mill.
But it isn’t.
There’s good grist and there’s useless grist. Similarly, there’s strong data and weak data. This holds true for all fields of research, whether developing a new medicine, discovering a new planet, solving a crime, or evaluating a new software interface.
User experience research is about observing what people do. It’s not about canvassing people’s opinions. This is because, as data, opinions are worthless: for every 10 people who like your design, 10 others will hate it, and 10 more won’t care one way or the other. Opinions are not evidence.
Behaviours, on the other hand, are evidence. This is why a detective would much rather catch someone ‘red-handed’ in the act of committing a crime than depend on hearsay and supposition. Hence the often-repeated advice: “Pay attention to what people do, not to what they say.” It’s almost become a UX cliché, but it’s a good starting point for a discussion about something important: strength of evidence. This is the notion that some data provide strong evidence, some provide only moderately strong evidence, and some provide weak evidence. You don’t want to base your product development efforts on weak evidence.
Evidence is what we use to support our claims and our reasoning. It’s what gives us credibility when we make decisions about a specific design parameter, about product features, about when to exit an iterative design-test loop, about go/no-go decisions, and about whether to launch a new product, service or website. Evidence is what we present to our development team and what we bring to the table to arbitrate disagreements and disputes. Evidence is what helps us avoid making knee-jerk, seat-of-the-pants decisions. We back our reasoning with evidence based on good data. Data are the stuff of research. “Data! Data! Data!” cried Sherlock Holmes. “I can’t make bricks without clay.”
It may look as though UX studies are ‘method-first’ events (“We need a usability test”, “I want a Contextual Inquiry study”, “Let’s do a Card Sort”) but the UX researcher, focusing on the underlying research question, thinks ‘data-first’: What kind of data must I collect to provide credible and compelling evidence on this issue? The method then follows.
Strong evidence results from data that are valid and reliable.
Valid data are data that really do measure the construct that you think they are measuring. In a usability test, valid data measure things like task completion rate and efficiency rather than aesthetic appeal or preference.
Reliable data are data that can be replicated if you, or someone else, conducted the research again using the same method but with different test participants.
No matter what the method, research data must be valid and reliable or the data should be discarded.
In UX research, strong data come from task-based studies, from studies that focus on observable user behaviours where the data are objective and unbiased—effectively catching the user ‘red-handed’ in the act. Strong data come with a confidence level, and assure us that further research is unlikely to change our degree of confidence in our findings.
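To make that last point concrete, here is a minimal sketch (my own illustration, with hypothetical participant numbers, not a figure from any particular study) of how you might attach a 95% confidence interval to a behavioural measure such as task completion rate. It uses the Adjusted Wald (Agresti-Coull) method, a common choice for the small samples typical of usability tests:

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Adjusted Wald (Agresti-Coull) confidence interval for a
    completion rate; z=1.96 gives a 95% interval."""
    n_adj = trials + z ** 2                   # inflate the sample size
    p_adj = (successes + z ** 2 / 2) / n_adj  # pull the rate toward 50%
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical result: 7 of 8 participants completed the task.
low, high = adjusted_wald_ci(7, 8)
print(f"Completion rate 7/8 = {7/8:.0%}; 95% CI roughly {low:.0%} to {high:.0%}")
```

With eight participants the interval is wide (roughly 51% to 100% here), and that is exactly the point: the data tell you how much confidence they deserve, and a replication that lands inside the interval is what reliability looks like in practice.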
The following is a brief taxonomy of methods based on levels of evidence—actually it’s a taxonomy of the types of data that result from the methods. It assumes, in all cases, that the method has been well designed and well conducted. It’s not an exhaustive list, but it includes methods the UX researcher is likely to consider in a typical user-centered design lifecycle.
Strong UX evidence invariably involves target users doing tasks or engaging in some activity that is relevant to the concept being designed or the issue being investigated.
To qualify as moderately strong evidence, data should come from studies that at least include carrying out tasks (whether by users or by usability experts) or that involve self-reporting of actual behaviours. These methods are often a precursor to methods in the ‘Strong’ category; they fall short of it because their data typically carry a higher degree of variability or uncertainty.
Decisions based on weak or flawed data can cost companies millions of dollars if they result in bad designs, poor marketing decisions or false product claims. So the obvious question is, why would you ever design a study to collect weak data?
You wouldn't.
Weak data have no place in UX research. They result from methods that are either badly flawed or little better than guesswork. If you have to choose between spending your UX budget on such methods and donating it to charity, opt for the latter.
Start by asking two questions of any study: Are the data valid? Are the data reliable? These are not trick questions: anyone presenting research findings should be able to answer them. During a study, you can ask yourself whether you are observing what users do or merely collecting what they say.
Some time ago I prepared a checklist for evaluating research studies. If you want to give a research study a good shakedown, you’ll find lots of useful checkpoints and questions there.
I started the article by promising a rule of thumb. Here it is. Use this as your mantra when evaluating the strength of user experience research: “Behavioural data are strong. Opinion data are weak.”
Dr. Philip Hodgson (@bpusability on Twitter) has been a UX researcher for over 25 years. His work has influenced design for the US, European and Asian markets, for everything from banking software and medical devices to store displays, packaging and even baby care products. His book, Think Like a UX Researcher, was published in January 2019.