UX Design KPI Examples: Learn How to Measure User Experience
User experience (UX) focuses on the design and usability of a website, application, or product. Good UX means that the user can solve their problem or fulfill the need without too much difficulty. This leads to greater user satisfaction, a higher conversion rate, and fewer business costs.
Still, it is ultimately the user who decides whether your product's UX is good or bad. So how do you know that you and your design team are on the right track and the product will deliver a great user experience?
This is where UX design KPIs come in handy. As Eleken is a team of product designers, measuring KPI performance is part of our work on every project. UX KPIs let us express a product's success rate in numbers and therefore see how effective the product is. And when we change an existing design, measuring the right indicators shows whether those improvements work the way we want.
As you may have guessed, in this post we want to discuss which metrics to use to measure user experience, and therefore the success of your design solution.
We'll also:
- explain the difference between the two main types of UX metrics
- provide you with UX metrics examples
- tell you how to collect data for them
- explain how and when to use these design success metrics.
Main types of UX metrics
UX metrics help you to understand the current state of the UX so that you can decide in what direction to make the improvements. Generally, we divide design KPIs into two types: behavioral and attitudinal.
Attitudinal metrics focus on what users think and say about your product, while behavioral ones focus on customers’ direct interactions with your product. Over time, these indicators will help you track and compare the quality of your user experience.
There are many behavioral metrics; this list covers the ones we find most helpful for measuring and tracking changes in the quality of the user experience:
- Pageviews
Pageviews is an engagement metric that shows the number of pages a user has viewed on your site over a time period. It shows whether your users are interested in some content on the website or, on the contrary, have trouble finding certain information. To add context to this metric, it's best to combine it with the other metrics we discuss next.
- Time per task
Time per task (TPT) measures how long it takes a user to complete a task. To get the average TPT score, we add up the results of all respondents and divide by the total number of respondents. In most cases, the less time it takes the customer to succeed, the better UX your product offers.
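The averaging described above is straightforward; here is a minimal Python sketch (the task times are made-up sample values):

```python
def average_time_per_task(times_seconds):
    """Average Time per Task: sum each respondent's completion time
    and divide by the number of respondents."""
    if not times_seconds:
        raise ValueError("need at least one measurement")
    return sum(times_seconds) / len(times_seconds)

# Five participants completed the same task in these times (seconds):
times = [42.0, 55.0, 38.0, 61.0, 49.0]
print(average_time_per_task(times))  # 49.0
```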
- Task Success
This KPI shows the percentage of customers who have successfully completed a specific task (for example, completing a profile or filling in billing information).
How to calculate: divide the number of successfully completed attempts by the total number of attempts and multiply by 100.
The more respondents you have, the more accurate the Task Success result is. Also note whether the user is completing the task for the first time. This way you can track how their experience changes over time.
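As a sketch, the Task Success calculation takes a few lines of Python (the counts below are hypothetical):

```python
def task_success_rate(successful, attempts):
    """Task Success: percentage of attempts that ended in success."""
    if attempts <= 0:
        raise ValueError("attempts must be positive")
    return successful / attempts * 100

# 17 of 20 test participants managed to fill in the billing information:
print(task_success_rate(17, 20))  # 85.0
```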
- Error rate
Error rate shows how many times users enter incorrect information (make mistakes while completing the task). It allows you to understand how user-friendly your product is.
There are two ways to calculate the error rate:
- If it is possible to make one error per task (or there are many error opportunities but you want to track only one) we calculate the error occurrence rate:
For example, three out of twenty users made a mistake when entering their password. We calculate the error rate as follows: 3 / 20 x 100 = 15%.
- In case it is possible to make several errors per task you can calculate an average error occurrence:
For example, five users were filling in billing data, a task with six error opportunities. User 1 made one mistake, user 2 made three, user 3 made none, user 4 made two, and user 5 made two. The average error occurrence rate is the total number of errors divided by the total number of error opportunities: (1 + 3 + 0 + 2 + 2) / (6 x 5) x 100 ≈ 26.7%.
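Both variants of the error rate can be sketched in Python; the figures below reproduce the two examples from the text:

```python
def error_occurrence_rate(users_with_error, total_users):
    """Single-error case: share of users who hit the error, in percent."""
    return users_with_error / total_users * 100

def average_error_occurrence(errors_per_user, opportunities_per_task):
    """Multi-error case: total errors divided by total error opportunities
    (opportunities per task times number of users), in percent."""
    total_errors = sum(errors_per_user)
    total_opportunities = opportunities_per_task * len(errors_per_user)
    return total_errors / total_opportunities * 100

print(error_occurrence_rate(3, 20))                  # 15.0
print(average_error_occurrence([1, 3, 0, 2, 2], 6))  # about 26.67
```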
- Bounce Rate
Bounce rate shows how often users give up on a task, for example filling out payment details. To learn the reason why users bounce, you should combine this metric with some of the attitudinal metrics we are going to discuss below.
Collecting data for behavioral metrics is quite easy, and it can even happen automatically, without involving an interviewer or observer in the process. You can gather this data from web and application analytics, based on user sessions on the site, search history, bug tracking, and so on. This makes it an easy and inexpensive way to start tracking UX metrics.
You can also track these metrics with the help of other UX research methods: observation, A/B testing, eye tracking, usability testing.
All these metrics are, of course, important, but they do not give a complete picture and understanding of why you are getting these numbers. And this is where attitudinal metrics come into play.
Attitudinal metrics measure what people say and how they feel about your product. There are fewer of these indicators than behavioral ones, but they are no less important. Here are some of them:
SUS (System Usability Scale)
This metric is widely used among UX designers and researchers. It is based on a survey that aims at evaluating the ease of use of a site or product. The survey consists of 10 questions, which the user should answer with a score from 1 to 5 (ranging from strongly disagree to strongly agree).
You can also use this metric to compare your product with competitors' or with your own previous version before an improvement. To calculate the score, adjust each answer first: odd-numbered (positively worded) questions contribute their score minus 1, while even-numbered (negatively worded) questions contribute 5 minus their score. Sum the ten adjusted values and multiply by 2.5 to get a result from 0 to 100 points. The average SUS score is 68.
If you get 68 points or more, usability is in good shape; if you score lower than 68, your product requires optimization.
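The standard SUS scoring, with its per-item adjustment, can be sketched in Python like this:

```python
def sus_score(answers):
    """SUS score for one respondent: ten answers on a 1-5 scale.
    Odd-numbered items contribute (score - 1), even-numbered items
    contribute (5 - score); the sum is scaled by 2.5 to reach 0-100."""
    if len(answers) != 10:
        raise ValueError("SUS needs exactly 10 answers")
    total = sum(
        (a - 1) if i % 2 == 1 else (5 - a)
        for i, a in enumerate(answers, start=1)
    )
    return total * 2.5

# A respondent who fully agrees with the positive items and fully
# disagrees with the negative ones gets the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```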
CSAT (Customer Satisfaction)
It is often important to be aware of the overall level of user satisfaction concerning everything from features to app functionality. UX satisfaction can be measured using the CSAT - Customer Satisfaction score.
CSAT can give you a general idea of how users feel about your product, or it can provide you with more detail on specific features or stages of the customer journey. Typically, the CSAT is based on a scale from 1 (very dissatisfied) to 5 (very satisfied) and asks a question “How satisfied are you with the service/app?”.
But you can also be more specific and ask something like “How satisfied are you with how easy it was to find the product you were looking for?”, and so on.
To calculate the percentage of satisfied users, divide the total number of satisfied users (those who voted 4 or 5) by the total number of respondents and multiply by 100.
(Satisfied users/Total number of respondents) x 100 = percentage of satisfied users
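The same formula in Python (the ratings are invented sample data):

```python
def csat(ratings):
    """CSAT: percentage of respondents who answered 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

print(csat([5, 4, 3, 5, 2, 4, 1, 5]))  # 62.5 (5 satisfied out of 8)
```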
NPS (Net Promoter Score)
If users tend to recommend your product, app, or website based on their experience, then your UX is probably good.
To track the NPS you need to ask users only one question: How likely are you to recommend this service/app/website to your friends and colleagues?
Users give a score from 0 to 10, where zero stands for “not at all likely” and ten means “very likely”.
Based on the results, we divide users into three categories: detractors (those who gave 0 to 6 points), passives (7-8), and promoters (9-10). Then we calculate the NPS by subtracting the percentage of detractors from the percentage of promoters.
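A minimal NPS calculation, using the standard 0-10 scale (the scores below are made up):

```python
def nps(scores):
    """NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / n * 100

# 4 promoters, 3 passives, 3 detractors out of 10 respondents:
print(nps([10, 9, 8, 7, 6, 3, 9, 10, 5, 8]))  # 10.0
```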
How to collect data for attitudinal metrics
There are many ways to measure attitudinal metrics but the most popular are polls, user interviews, and widget buttons.
The easiest, most efficient, and least time-consuming way to collect this kind of data is a feedback button (CTA) you place on your website or app. Users click it whenever they want to share their opinion. You can install such CTAs along the entire user journey or in a specific part of it.
Polls, unlike a button, are not activated by users, but by your app/website. They tend to be highly targeted, allow you to ask more questions at a time, and segment your respondents.
The two most popular types of polls are slide-out (slides out to the side of the screen) and full-screen (appears right in the middle of the screen). They are also used to recruit users for in-depth interviews or surveys.
User interviews are an easy and effective way to get data/feedback from the customer. During the interview, you ask a user questions on a certain topic that you’ve prepared beforehand.
This method helped us a lot when we were working on the redesign of Gridle, a client experience platform. We conducted six interviews to gather information about users' needs and priorities, then transformed the insights from these interviews into an empathy map to get a deeper understanding of customers.
Remember that metrics alone are not enough to make UX improvement decisions. You will need to "enrich" this data with the context and details it is missing. Behavioral and attitudinal UX metrics by themselves cannot answer every question.
At the end of surveys, ask open-ended questions so that users can justify their answers and give you more information. This is the only way to understand what experience they received, at what point it was good, and at what point something went wrong.
How to choose the right UX metric
It is impossible to create an objective list of, say, the "5 best UX metrics to track". There is only the classification of user experience KPIs into behavioral and attitudinal. When choosing what to track, first take into account what is important for your customers, your business, and the user experience you want to measure. For everything else, Google's HEART framework will help you.
In 2010, Google experts wrote an article about the framework that helped them choose the right metrics for 20 different products. The essence of Google HEART is to effectively combine behavioral and attitudinal metrics.
HEART stands for Happiness, Engagement, Adoption, Retention, and Task Success. If you look at the description of each item below, you will realize that each of them is either behavioral or attitudinal:
- Happiness includes attitudinal metrics: CSAT, NPS, and SUS.
- Engagement includes usage metrics such as visits per user per week, number of photos each user uploads per day, average session length.
- Adoption and Retention include metrics such as the number of unique users over a period of time, to differentiate new users (adoption) from existing or returning users (retention).
- Task success includes behavioral metrics such as task success rate and error rate.
All of these metrics are useless if they are not tied to some kind of user goal. For example, if your site visitors spend a lot of time on your website, this does not mean that your UX design is good. On the contrary, it can mean the opposite - they spend a lot of time just to complete a simple task.
So, first define the user goal (What do users want to achieve? How does the product help them reach it?) and then choose the appropriate metric.
“If you cannot measure it, you cannot improve it” – Lord Kelvin
Without constantly tracking user experience KPIs, it's difficult to understand if you're on the right track and that the work you do is meaningful and rewarding.
Don't miss the opportunity to use real-time feedback from users. Use both behavioral and attitudinal metrics to measure, compare and track the quality of the user experience over time. UX metrics will also allow you to see how product changes affect customers and the business itself.
And of course, measuring the success of the user experience alone won’t help to ensure that your business is doing great. Read about key SaaS metrics to measure the right indicators and keep your business on track.
14 Essential UX Research Methods and How They Are Used
Every delightful and every frustrating artifact that exists in the human world, exists thanks to a series of design decisions. The difference between the delightful and the frustrating design lies in the area of research. Wait, research?
“Research” sounds like money you don't have and time you can't spare, sitting around instead of moving forward and creating something. Once a project is born, it's already over budget and behind schedule. The startup gold rush makes us race faster, harder, stronger through the roadmap. So it looks like we need to cross research off the list.
But for a design to be successful, it must serve the needs and desires of actual humans. And unfortunately, being a human is not enough to understand the almost 8 billion other humans on Earth.
Every time you make a product decision, you are placing a bet. You risk doing something wrong (or doing something right but in the wrong way). With guesswork, your chances for success are fifty-fifty. Either you guess it, or you don't.
Research is the ace up your sleeve you can play to avoid a costly mistake. The more you learn, the better your chances are. Rather than piling on the costs, user experience research can save you a ton of time and effort.
It seems obvious why user research matters until you have to prove this necessity to clever people with revolutionary business ideas. As a UI/UX design agency, we even have a doc called “How to explain to clients that some time should be allocated to research.”
This post exists to help you figure out what UX research is, how it fits into the user experience research process, and how to do user experience research if you’re already over budget and behind the schedule.
What is user experience research?
Have you ever seen the cartoon Hedgehog in the Fog? It's a ten-minute Soviet cartoon, unfamiliar to most foreigners, that always leaves me in tears.
A hedgehog makes his regular evening journey to his friend, bear cub. Finding his way through the forest, he sees an unfamiliar fog bank. Getting off the path, the hedgehog curiously inspects the fog and gets completely turned around. A falling leaf terrifies him, bats scare him, and a weird owl tags along with him. Mysterious strangers and a pinch of luck help him find the right way.
It reminds me of starting a design project. Every time we develop something new, we stand at the frontier of knowledge, in front of the fog. To design, write, or code the best solution that has ever existed for the problem we've just faced, we have to embrace danger and plunge into the unknown, exposing ourselves to criticism and failure every single day.
You can be brave and jump right into the fog with your fingers crossed. Or you can stay on the shore, waiting till the fog clears. What else can you do? Let a firefly light your way, just enough for a better view of your surroundings. UX research is your firefly.
Erika Hall, in Just Enough Research, defines UX research as a systematic inquiry. You want to know more about the foggy topic in front of you, so you go through a research iteration to increase your knowledge. The type of research depends on what and when you want to know.
Types of UX research and how they can benefit you
There are many, many ways to classify types of user research. The one I've chosen for you helps to understand what kind of research can be useful at different stages of your design process.
Generative UX research
You run generative UX research to find the endpoint of your design project when you're standing in front of the fog bank. Such research leads to ideas and helps define the design problem. The generative toolkit includes googling, reviewing the existing solutions in the niche, conducting interviews, and field observations.
We, as a design agency, rarely have to deal with generative research. Take one of our clients, TextMagic. Originally, the app helped companies connect with clients via text messages. But the team figured out that their audience would appreciate some new features for marketing, customer support, and sales. This is when they turned to Eleken — when a round of generative research was in order.
Descriptive UX research
Descriptive user experience study is our alpha and omega, and our bright morning star. This is what we do when we already have a design problem, aka our endpoint. We're looking for the optimal way to that point: the best way to solve the problem that was identified during the generative research phase.
To find the optimal way, we need to put ourselves into the users’ context — to ensure that we design for the audience, not for ourselves. Based on your goals, resources, and the timeline of the project, you can choose from a wide landscape of user research methods to gather the info you need. Look how we did descriptive research for Gridle, a client management app that came to us for a redesign.
We figured out that the Gridle team used Inspectlet, a session recording app, for their internal web analytics. So we got a chance to examine recordings of how visitors were using Gridle.
With zero research budget and in the shortest term possible, we understood which features users couldn't live without and which ones they didn't mind skipping. Just as if we were looking over their shoulders. Thus, we’ve learned what was good and what could be improved.
Next, we wanted to understand how we should improve the app to make it more valuable for users. Gridle had a strong customer base on Facebook, so it was easy to find volunteers for one-hour user interviews. As a result, we could understand and prioritize users’ needs, and transfer them to an empathy map.
Once we have a clear idea of the problem we're trying to solve, and the way we’re going to solve it, it’s time to roll up our sleeves and start working on potential solutions. In the process, we need to check how we are doing to fix any issue before it causes further mistakes.
Evaluative UX research
When you're doing such ongoing testing, you're doing evaluative research. It works best when you test your progress iteratively as you move through the design process. The most common method of evaluative research is usability testing, but any time you put your solution in front of your client or the audience, the feedback you get counts as a round of evaluative research.
Causal UX research
Once your app or website is live, you may notice that people behave unexpectedly. Maybe something went wrong, or surprisingly well. When you want to understand what happened, you resort to causal research.
For instance, we at Eleken figured out that a part of our leads isn't a great fit for our business model. We focus on UI/UX design for SaaS apps. That's what we know best, and that's what we are brilliant at. Yes, we can help our loyal customers with marketing design, for instance, but if a notable part of our leads comes to us for marketing design specifically, something inside our landing page needs to be adjusted. The task of causal research here is to find the element that needs adjusting.
When it’s just enough research
No UX research is one extreme. The opposite extreme is nonstop inconclusive testing of random things, like button colors or fonts in pursuit of a good user experience. Doing irrelevant research, you risk ending up disillusioned or losing organizational support for any experiments.
You can't test everything, and you'll never reach 100% confidence in your design until it's live. That's the little rush of adrenaline that makes our job so satisfying. When considering a research round, ask yourself: is this absolutely necessary?
How do you do user research in UX? To make the best use of your time, try to measure the importance of UX research projects in terms of hedging risks. Imagine what would happen if, half a year from now, you realized that:
- you were solving the wrong problem;
- you were working on a feature that doesn't actually matter to users;
- you were wrong about your users' habits and preferences.
If that doesn't leave you in a cold sweat, it is probably not your top-priority research.
How to start a UX research project
What is key in user research? Your objectives: they define what you're doing, why you're doing it, and what you expect from the UX research process.
As soon as your objectives are ready, you start looking for appropriate research methods. These can be interviews or focus groups, A/B testing or usability research techniques; it all depends on your goals and resources.
Here is a list of our favorite UX research methods that we use regularly.
Diary study
Used: to learn users' feelings and habits in depth
+++ may open new insights in areas that were outside the researchers' attention
- - - depends on how motivated and dedicated the users are
This is the ultimate UX research method for getting inside the minds of users. For a diary study, you ask users to keep a diary for a period of time. The diary contains all their reflections related to the subject of the study: thoughts, actions, emotions, desires, etc. It can last for a week or more, depending on the subject and the time available.
A diary study works great at the initial stages, when it is important to understand users' goals, jobs to be done, and problems well. The collected information makes a solid foundation for a user persona.
Ethnographic (field) research
Used: to see how users interact with the product
+++ studies real situations, not modeling
- - - not always accessible
Ethnographic Research (aka Contextual Inquiry) is a process of observing users in their natural environment, analyzing their ways of acting in certain situations. It is the same process that an ethnographer does, but with a very concrete focus on the product, activity, or problem that the UX researcher is interested in.
Observing people in real-life situations is not always feasible. For example, visiting a bank headquarters to study how employees use the CRM system is easier than observing how people use dating apps.
Mouse tracking & click tracking
Used: to test a prototype or find issues in the ready product
+++ can collect data about behavior patterns of a large number of users
- - - risk of incorrect conclusion
Compared to other user research techniques that involve a researcher following the user interactions in real-time or in screen recording, this method allows a UX researcher to process more data from a large number of users and see the major tendencies of user interactions. To choose the right software for that, check out our list of best UX research tools.
Here are some of the insights that heatmaps of mouse tracking reveal:
• What parts of the interface have the most clicks?
• What buttons have fewer than expected clicks?
And so on. Click heatmap doesn’t give direct answers, but it certainly highlights the areas that need some improvement.
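At its core, a click heatmap is just raw (x, y) click coordinates binned into grid cells so that dense areas stand out. A toy sketch (the coordinates and the 50-pixel cell size are arbitrary choices for illustration):

```python
from collections import Counter

def click_heatmap(clicks, cell=50):
    """Bin (x, y) click coordinates into cell-sized grid squares
    and count clicks per square."""
    return Counter((x // cell, y // cell) for x, y in clicks)

clicks = [(10, 12), (14, 48), (240, 300), (245, 310), (235, 305)]
heat = click_heatmap(clicks)
print(heat.most_common(1))  # [((4, 6), 3)] - the hottest grid cell
```

Real tools render these counts as color intensity over a screenshot, but the aggregation step is essentially this.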
Eye tracking
Used: to test user interfaces
+++ highly precise
- - - requires special technical resources
Just like click tracking and mouse tracking, eye tracking gives hints that need the right interpretation. Why do users spend so much time looking at the headline? Is it because the font is so beautiful, or because the text is hard to read? Or both?
If eye tracking sounds like a thing from a dystopian novel, don't worry. It is a relatively new technology, but it does not require very sophisticated devices. Unlike some other techniques described here, which require just a researcher, a user, and a notebook, this one can't be done without special software. However, it is more affordable than you would expect: eye trackers use cameras, projectors, and algorithms to catch the user's gaze.
While click tracking shows actions that involve thinking and intention, eye tracking captures reactions that might be hard to reflect on, and therefore would not come up in user interviews. Like when people focus too much on a picture that is supposed to be just a background to the text.
In-depth interview (IDI)
Used: at any stage
+++ allows you to get lots of insights and be flexible when asking questions
- - - takes a lot of time to cover many respondents
As you may guess, this method of UX research implies one-on-one talk between the researcher and the user. There are two types of interviews: directed (following a prepared list of questions) and non-directed (letting the interviewee talk about their experience, with as little interruption as possible). The latter technique gives an opportunity to find some insights about the user experience that the researcher was not aware of.
When you have the list of questions ready, estimate the duration of the talk and inform the interviewee in advance.
Intercept interview
Used: at any stage
+++ random but well-targeted selection of respondents
- - - hard to get detailed information since people may not be ready to dedicate much time to it
To run this type of interview, the researcher has to “catch” users or potential users in the place of their natural habitat, in a situation when they would be using the product. This type of interview has to be short, but it can be combined with field research to provide more information.
Let’s say we want to see how people interact with a supermarket loyalty app. To do this, we go directly to the supermarket, watch people using it, and ask questions.
Email survey
Used: at any stage
+++ Cheap and accessible
- - - Risk of non-response error (you miss the valuable input of people who are frustrated with the product or just don’t want to fill in email surveys)
This is one of the most natural ways to reach a large number of target customers. It is much easier to get people to answer a few questions than to get them to commit to an hour-long interview, and it needs no coordination in time and space and has no geographical limits.
An email survey works best with an existing database of users. When you are doing UX research for a new product without a customer database, you have to make sure you send your emails to contacts that belong to the target audience. You can include a couple of demographic questions to check whether respondents' profiles are relevant to the product.
Email surveys don't have to be paid, but to increase the response rate, you can give small gifts to those who finish the survey.
On-site survey
Used: to understand what users think of an existing product
+++ captures the experience of real users at the right moment
- - - possible only when the product is already out there and functioning
This survey appears on the page right after the user has interacted with the product. This way, very direct questions can be asked like what was the user intent, whether they succeeded, and what were the issues. An on-site survey allows the research to cover any segment of users: those who are using a particular feature, or those who exit the website without purchase, and so on.
Surveys are some of the most common and easy to execute UX research techniques. With a survey, you can collect both quantitative and qualitative data with close-ended and open-ended questions. However, trying to insert too many questions is dangerous: the longer the survey, the fewer the responses. Good practice is to warn users how long the survey will take before it starts.
Focus group
Used: to discover users' needs and feelings
+++ Takes less time compared to individual interviews
- - - Hard to conduct online
A focus group is when a researcher has a conversation with a group of users at the same time, typically six to nine participants. A focus group is not just a way to save time on personal interviews: the results can differ, because people behave differently when they are around peers.
Working with a focus group requires special preparation: knowledge of psychology helps create the right atmosphere and get valuable insights.
Card sorting
Used: when building information architecture
+++ requires little preparation
- - - the results may be inconsistent and hard to analyze
Card sorting is a method that helps build the very fundamental architecture of the product. All the main units are written on separate cards and users are asked to sort them into categories. This tool prevents designers from blindly following habitual structures that they have used before.
Tree testing
Used: when you have to verify information architecture or test how it works with user tasks
+++ works both online and offline
- - - only tests information architecture without taking other factors into account
This method can be the next step after card sorting, or it can be used separately when the information architecture has already been created and needs to be verified.
To start, you present a complete hierarchy of all the categories. Then, the researcher asks the user to find a particular category.
Try to avoid giving direct indications, like “Find UI/UX services”. Let’s imagine we are testing the navigation of this website. The task may sound something like “You are about to launch a SaaS startup and you are looking for designers to make an MVP. What page would you go to?”.
Competitor analysis and benchmarking
Used: at the initial stages of development and when analyzing the existing product
+++ good tool for finding product-market fit
- - - excludes real users
Finally, here is a UX research method that doesn't require talking to strangers. It seems like an obvious step in developing a product, but you'd be surprised how many product owners skip deep research and rely on what they already know about the market.
Why do you need in-depth competitor analysis? First of all, it saves you from reinventing the wheel. Sometimes, when you commit too much to design thinking, you end up crafting a solution that is already on the market. Secondly, analyzing competitors helps you find their weak points that you can address, and define a value proposition that will make your product stand out.
Usability testing
Used: to analyze how user-friendly the product or prototype is
+++ allows you to see the interaction and talk to users to understand them better
- - - limited amount of users studied
Usability testing is how most people imagine UX research: a researcher following a group of users while they perform tasks with the product. Usability testing also includes asking questions to understand the motives behind users' actions.
Based on the results, a researcher can define potential issues and solve them in the next iteration.
A/B testing
Used: to compare two versions of a solution
+++ shows clearly which version is chosen by the majority of users
- - - hard to execute in some cases
For an A/B test to work, a group of users has to be divided randomly in two. Each group is offered one of the two versions of a product, and the results are compared to understand which one performs better. A/B testing can be executed on its own or in combination with another UX research method: for example, tree testing of two different hierarchies.
It is important to keep the A and B versions close to each other, ideally differing in a single change, so that the difference in results can be attributed to that change rather than interpreted ambiguously.
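Deciding "which version performs better" usually comes down to comparing conversion rates between the two groups. One common sanity check is the two-proportion z-score; a sketch with invented conversion counts:

```python
from math import sqrt

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: how many standard errors separate the
    conversion rates of variants A and B. |z| > 1.96 is roughly
    significant at the 95% confidence level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 120 conversions from 1000 users; variant B: 150 from 1000.
print(round(ab_z_score(120, 1000, 150, 1000), 2))  # 1.96 - borderline significant
```

Dedicated A/B testing tools do this (and more) for you, but the sketch shows why sample size matters: small groups produce small z-scores even for real differences.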
This list is not exhaustive; new methods and tools appear constantly in the world of UX design. Each stage requires different techniques, and it takes time and experience to figure out which one works best for a particular case.
Still wondering if you need all of this for your project? Ask our professionals; they know all the whats, whichs, whens, and whys of UX research. Drop us a line!
Unlock the Power of UX Research Process to Design With Confidence
The important word here is "systematic". A methodical process of designing a research project is what saves your precious time and brainpower, and helps you get maximum value from research.
Sounds good but abstract, so let's try to visualize our research process.
Seems like any process can be shown as a line. Every time we at Eleken write an article like the one you’re reading, or a case study about a new product Eleken’s team has designed, we’re trying to stretch out the research our designers have done into a chain of steps.
And every time designers argue that they are rather buzzing around their research subject than moving from point A to point B. Thus, the UX discovery studies feel more like a loop, where you discover, define, design, and rerun it all one more time. But a closed loop as a route gives us a migraine. It shows no progress.
Let's instead make our research process model look like a spiral with many loops. We start with wide discovery research, when we know nothing about the product, the audience, or the market, and pick up a few rough ideas on the topic; then we spiral back, making connections between the ideas. Then we keep circling, again and again, gradually adding new insights, validating or discarding our assumptions, making more connections, and checking whether each idea is consistent (or inconsistent) with the users' expectations.
With every new loop, we’re getting closer to our goal — the perfect fit between the product and the audience’s needs.
Now, when we understand the approximate path, let’s see what elements it consists of.
What are the 4 stages of UX research process?
The research process consists of single studies we conduct to learn something new. Every study, whether it is a usability test, a benchmark study, or a user interview, is a set of steps. The DECIDE framework provides a convenient way to identify those steps; the name stands for the six steps in conducting user research for effective UX:
- Determine the goals,
- Explore the questions,
- Choose the methods,
- Identify the practical issues,
- Decide how to deal with ethical issues, and
- Evaluate the results.
If you need some tips for running a specific study, check out this UX research plan template; it digs much deeper into this topic.
When you face a challenge to design or redesign an app, you need to string a series of such specific studies into a system that will help to gain all the knowledge you need to get the job done. The first step here is to clarify what knowledge you are looking for. At different stages of product development, you need different insights.
Say, you might want to get a sense of your users’ problem to solve it via your app. Or you need to test the product prototype to see whether you’re moving in the right direction in terms of usability. Finally, you can ask for feedback when everything is ready to see if any improvements need to be made.
We can split our research into four phases according to our intent — to discover, to explore, to test, and to listen to the reaction. Let’s look at each of the phases in detail, and see how they fit into the overall product design project timeline.
User research discovery phase
The discovery phase is a way to deal with the uncertainty that is inevitable at the onset of any project. To beat the uncertainty, you’re googling and doing qualitative interviews to collect and analyze information about the app, its audience, and intended market.
Discovery helps to clarify the goal and the direction of further movements. If your assumptions make you do a wrong thing or a right thing but in the wrong way, this stage is your chance to figure things out.
It follows that discovery research works especially well when done before the design itself, while no effort has been wasted yet. But you can return to discovery research anytime you need to.
Exploring research phase
During the exploration phase, you dig deeper into the topic to solve the applied design problems that come up as you work.
You compare your features against competitors' and spot user experience shortcomings to fix in your own app. You split your audience into personas and build user flows to define risky areas for losing customers along the way. You analyze users' tasks to find ways to save their time and effort with your design decisions.
This research phase overlaps with your active phase in the design process. Whenever you need to validate your design assumption, you use one of the exploring methods.
Testing research phase
The research to ensure that your design is easy to use is mostly done as usability testing.
Nielsen Norman Group teaches us that if you can do only one activity in an effort to improve an existing system, you should choose moderated usability testing, where the person interacts with the interface while continuously verbalizing their thoughts as they move through the tasks.
Thinking-aloud usability tests sound easy and cheap. You recruit representative users, give them tasks to perform, let them do the talking, and sit nearby absorbing the insights. That's how it worked in the pre-pandemic era. If you want to run such research remotely, we recommend Lookback, one of the remote moderated usability testing tools we at Eleken use. Check out our list of UX research tools that can save the day.
Testing research happens repeatedly during the design process and beyond so that you have time to make changes to your design if the test shows that such changes will benefit the product.
Listening research phase
You can't anticipate everything by testing your interfaces on small samples. Your final and most reliable test team is your actual users. So after your product is released, you should listen carefully to the feedback and monitor user problems, successes, and frustrations.
This observation may trigger a new cycle of design and development changes intended to improve the user experience even more.
When to use which UX research method
There’s a broad list of UX research methods that can answer the questions you ask yourself within each of the four phases of your research. If you want to get to know them better, we have a whole detailed article about UX research methods. However, understanding methods is only half the battle.
Projects whose budgets and timelines allow using the full set of methods exist only in our dreams. Life is about making choices. Sure, you could use one or two familiar methods all the time, but would they give a perfect mix of data every time, given that no two apps are identical?
To help you choose the right method, Nielsen Norman Group suggests using their three-dimensional framework that is so good I’m jealous it wasn’t me who came up with this.
Here we have 20 methods mapped across the frame with the following axes:
- Attitudinal ↔ Behavioral
- Qualitative ↔ Quantitative
- Context of Use
The attitudinal vs. behavioral distinction helps us identify the gap between what people say and what people do. Usually, in the discovery and exploring phases you need self-reported data, gathered from interviews and card sorting. Behavioral data is especially useful when you're testing your interfaces.
Now, let’s explore the difference between qualitative and quantitative methods. Qualitative studies observe the event or behavior directly, as is the case with focus groups. They are perfectly suited for answering questions about why or how to fix a problem. Quantitative studies gather the data indirectly, through an analytical tool, for instance. Thus, they are useful when your questions start with how many and how much.
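To make the "how many and how much" side concrete, here is a minimal sketch of computing two common quantitative UX metrics, task success rate and time on task, from session records. The records are hypothetical stand-ins for what an analytics export might give you:

```python
from statistics import median

# Hypothetical per-session records from an analytics export:
# (user_id, completed_task, seconds_on_task)
sessions = [
    ("u1", True, 42.0),
    ("u2", False, 95.5),
    ("u3", True, 37.2),
    ("u4", True, 61.8),
    ("u5", False, 120.0),
]

# "How many" question: what share of users completed the task?
success_rate = sum(ok for _, ok, _ in sessions) / len(sessions)

# "How much" question: how long did successful completions take?
success_times = [t for _, ok, t in sessions if ok]

print(f"Task success rate: {success_rate:.0%}")
print(f"Median time on task (successes only): {median(success_times):.1f}s")
```

Numbers like these tell you *that* something is wrong (40% of users fail the task); a qualitative follow-up, such as a moderated test, tells you *why*.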
Finally, the context of use means that depending on the phase where you are in your designing process, you can run tests without any product, with a scripted version of the product, or with the actual product when it’s (almost) ready.
Let's say we've started doing a website redesign and need to figure out how many weak spots there are to fix. We'll use Google Analytics or Hotjar to figure out what frustrates our users. Next, we have a few hypotheses on how to fix the issues. We make paper prototypes and find five volunteers for usability lab studies.
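Why five volunteers? The often-cited justification is Nielsen and Landauer's problem-discovery model: if each tester independently finds a given usability problem with probability p (about 0.31 on average in their data, though real values vary by product and task), the expected share of problems uncovered grows quickly with the first few testers. A quick sketch:

```python
def share_of_problems_found(n_testers, p=0.31):
    """Expected share of usability problems uncovered by n testers.

    Based on Nielsen & Landauer's model: each tester independently
    finds a given problem with probability p (~0.31 is their
    published average; your product's value may differ).
    """
    return 1 - (1 - p) ** n_testers

for n in (1, 3, 5, 15):
    print(f"{n:>2} testers -> {share_of_problems_found(n):.0%} of problems found")
```

With p = 0.31, five testers surface roughly 85% of the problems, which is why small iterative tests beat one big expensive study: fix what five people found, then test again.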
Knowing what you want to ask, and what context of use you can afford at the stage where you are, the problem of choosing the right method is not a problem anymore.
Now it’s your turn.
Want to dig further into the user experience topic? Here is an article about human-centered design — our North Star that we aspire to when conducting UX research.
For more research-focused reading, check out this article about design audit; it is full of usability checkup tricks, and you can see how we run research at Eleken UI/UX design agency.