
Measuring the Intangible: Usability Metrics


How do we evaluate the design? Typically, the first ones to approve it are the designers themselves. After that, other decision-makers give their opinion. Then, some users test it and give their feedback.

Having a couple of levels of approval is great, but all those opinions are rather subjective. As a design agency, we always promote a data-driven approach. And in the case of measuring user experience, usability metrics are essential data.

What are usability metrics?

Usability metrics are a system for measuring the effectiveness, efficiency, and satisfaction of users working with a product.

To put it simply, such metrics are used to measure how easy and effective the product is for users.

Most usability metrics are calculated based on the data collected during usability testing. Users are asked to complete a task while researchers observe the user behavior and take notes. A task can be "Find the price of delivery to Japan" or "Register on the website."

The minimum number of users for measuring usability is 5. Jakob Nielsen, co-founder of the Nielsen Norman Group, recommends running quantitative usability testing with 20 users.

Let’s take a closer look at the most used usability metrics. We’ll start with the metrics for effectiveness measurement.

Success score

However long your list of usability metrics is, the success score will probably be at the top of the list. Before we go into the details of usability, we have to find out if the design works. Success, or completion, means that a user managed to complete a task that they were given.

The basic formula for the success score is:

Success score = (number of successfully completed tasks ÷ total number of attempts) × 100%

The success score falls between 0 and 1 (or 0% and 100%). At the level of a single task, the system is binary: 1 means the task was completed successfully, 0 means it wasn't. All the nuances in between are overlooked, and partial task success is counted as a failure.

To have a more nuanced picture, UX researchers can include tasks performed with errors in a separate group. For example, the task is to purchase a pair of yellow shoes. The "partial success" options can be buying a pair of shoes of the wrong size, not being able to pay with a credit card, or entering the wrong data.

Let's say there were 20 users, 10 of whom successfully bought the right shoes, 5 chose the wrong type of delivery, 2 entered their address incorrectly, and 3 could not make the purchase. If we were counting just 0 or 1, we would have a rather low 50% success score. By counting all kinds of "partially successful" tasks, we get a whole spectrum.
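The two ways of counting can be sketched in a few lines of Python. The outcome groups and counts below mirror the shoe-purchase example above; the group names are illustrative.

```python
# Success score from usability-test results, counted two ways:
# strictly binary (only full success counts) and as a breakdown
# of "partial success" groups reported separately.
from collections import Counter

results = Counter({
    "success": 10,          # bought the right shoes
    "wrong_delivery": 5,    # chose the wrong delivery type
    "wrong_address": 2,     # entered their address incorrectly
    "failure": 3,           # could not complete the purchase
})

total = sum(results.values())  # 20 users, one attempt each

# Binary success score: only fully successful tasks count as 1.
binary_score = results["success"] / total
print(f"Binary success score: {binary_score:.0%}")  # 50%

# Nuanced view: the share of each outcome group, shown side by side
# rather than folded into one distorted average.
for outcome, count in results.items():
    print(f"{outcome}: {count / total:.0%}")
```

Keeping the groups separate (instead of averaging "half successes" into one number) is what preserves the diagnostic value described above.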

Note! Avoid counting a "wrong address" as 0.5 of a success and adding it to the overall average, as it distorts the results.

Each "partially successful" group can tell us more than a general success score: using these groups, we can understand where the problem lies. We expect this more often from qualitative UX research, while quantitative gives us a precise but narrow-focused set of data.

A product doesn't need a 100% success score to have good usability: across studies, the average completion rate is around 78%.

Now, let's move to the second of the most common usability testing metrics:

Number of errors

In user testing, an error is any wrong action performed while completing a task. There are two types of errors: slips and mistakes.

Slips are those errors that are made with the right goal (for example, a typo when entering the date of birth), and mistakes are errors made with the wrong goal (for instance, entering today’s date instead of birth date).

There are two ways of measuring errors: measuring all of them (error rate) or focusing on one error (error occurrence rate).

To find the error occurrence rate, count every occurrence of one specific error across all attempts and divide it by the number of attempts. It is recommended to count repetitive errors too: if a user clicks an unclickable zone more than once, count each click.

Error occurrence rate = occurrences of a specific error ÷ number of attempts

Error rate counts all errors. To calculate it, we first define every possible slip and mistake, which gives us the number of error opportunities per attempt. This number can be bigger or smaller depending on the complexity of the task. After that, we apply this simple formula:

Error rate = total number of errors ÷ (error opportunities per attempt × number of attempts)
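Both error metrics amount to a couple of divisions. The counts below are hypothetical, chosen only to show the two denominators at work.

```python
# Error metrics from a usability test. All counts are illustrative.
attempts = 20  # one task attempt per user

# Error occurrence rate: occurrences of ONE specific error
# (e.g. clicking an unclickable zone), counting repeats.
unclickable_zone_clicks = 8
occurrence_rate = unclickable_zone_clicks / attempts

# Error rate: ALL observed errors, divided by the total number of
# error opportunities (possible slips/mistakes defined per attempt).
total_errors = 14
opportunities_per_attempt = 5  # derived from the task's complexity
error_rate = total_errors / (opportunities_per_attempt * attempts)

print(f"Error occurrence rate: {occurrence_rate:.0%}")  # 40%
print(f"Error rate: {error_rate:.0%}")                  # 14%
```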

Can there be a perfect user interface that prevents people from making typos? Unlikely. That is why the error rate seldom equals zero: making mistakes is human nature, so errors during usability testing are expected.

As Jeff Sauro states in his "A Practical Guide to Measuring Usability," only about 10% of the tasks are completed without any mistakes, and the average number of errors per task is 0.7.

Success score and error rate measure the effectiveness of the product. The following metrics are used to measure efficiency.

Task time

Good usability typically means that users can perform their tasks successfully and fast. The concept of task time metric is simple, yet there are some tricks to using it efficiently.

Task time (average) = sum of each user's time on the task ÷ number of users

Once we have the average time, how do we know whether the result is good or bad? Other metrics have industry benchmarks, but task time is too task-specific to have one.

Still, you can establish an "ideal" task time: the result of an experienced user. To do this, add up the average time for each small action, like "pointing with the mouse" and "clicking," using the Keystroke-Level Model (KLM). This model allows us to estimate the time quite precisely.
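A rough KLM estimate is just a sum of per-action times. The operator values below are the classic estimates from Card, Moran, and Newell; the "log in" action sequence is a hypothetical example, not from the article.

```python
# "Ideal" task time via the Keystroke-Level Model (KLM).
# Operator times are the classic published estimates (seconds).
KLM_OPERATORS = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(actions: str) -> float:
    """Sum operator times for a sequence like 'MPBKKKK'."""
    return sum(KLM_OPERATORS[a] for a in actions)

# Hypothetical task: think, point at a field, click, type 8
# characters, think, point at the submit button, click.
login = "MPB" + "K" * 8 + "MPB"
print(f"Ideal task time: {klm_time(login):.2f} s")  # 7.34 s
```

Comparing the observed average task time against such an estimate shows how far real users are from expert performance.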

The task time metric is often measured to compare the results with older versions of the design or competitors. 

Often, the difference in time will be tiny, but caring about task times is not just perfectionism. Remember, we live in a world where most people leave a website if it hasn't loaded after 3 seconds. Saving those few seconds for users can greatly impact their user experience.

Efficiency

There are many ways of measuring efficiency. One of the most basic is time-based efficiency, which combines task time and success score.

Time-based efficiency = (Σ nij ⁄ tij) ÷ (N × R), where N is the number of tasks, R is the number of users, nij is the result of task i for user j (1 if successful, 0 if not), and tij is the time user j spent on task i.

Doesn't look basic, does it? Not all formulas are easy to grasp; it would take another article to explain this one in detail.
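In code the formula is less intimidating: for every user/task pair, divide success (1 or 0) by the time spent, then average over all pairs. The data below is illustrative.

```python
# Time-based efficiency, in "goals per second".
# results[user][task] = (success 0/1, time in seconds); made-up data.
results = [
    [(1, 30.0), (1, 45.0)],  # user 1: both tasks done
    [(0, 60.0), (1, 40.0)],  # user 2: failed task 1
]

# Flatten to one (success, time) pair per user/task combination.
pairs = [(n, t) for user in results for (n, t) in user]

# Average of n/t over all N*R pairs.
efficiency = sum(n / t for n, t in pairs) / len(pairs)
print(f"Time-based efficiency: {efficiency:.4f} goals/sec")  # 0.0201
```

Like task time, the number is meaningful mostly in comparison: between design versions, or against a competitor tested on the same tasks.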

Tracking metrics is a whole science. If you want to dive deep into it, check out our list of best books about metrics (or leave it to the professional UX designers).

Now that we have figured out how to measure both effectiveness and efficiency, we get to measuring satisfaction, the key to user experience studies.

There are many satisfaction metrics, but we'll highlight the two we consider the most efficient. For these metrics, the data is collected during usability testing by asking the users to fill in a questionnaire.

Single Ease Question (SEQ)

This is one of those simple yet ingenious solutions that every UX researcher loves. Compared to all those complex formulas, this one is as straightforward as it gets: a single question asked after the task.

SEQ
Image credit: measuringu.com

While most task-based usability metrics aim at finding objective parameters, SEQ taps into the essence of user experience: its subjectivity. Maybe the task took a user longer to complete, but they had no such impression.

What if the user just reacts slower? Or was distracted for a moment? A user's subjective evaluation of difficulty is no less important than the number of errors they made.

On average, users evaluate task difficulty at 4.8. Aim for results at least that high.

System Usability Scale (SUS)

For those who don't trust the single-question solution, there is a list of 10 questions known as the System Usability Scale. Based on the answers, the product gets a score on a scale from 0 to 100 (each answer contributes 0–4 points, and the sum is multiplied by 2.5).

System Usability Scale
Image credit: Bentley University

This scale comes in handy when you want to compare your product with the others: the average SUS is 68 points. Results over 80 are considered excellent.
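The standard SUS scoring can be sketched in a short function: odd-numbered items are positively worded (contribution = response − 1), even-numbered items negatively worded (contribution = 5 − response), and the 0–40 sum is multiplied by 2.5. The respondent's answers below are hypothetical.

```python
# System Usability Scale (SUS) scoring for one respondent.
# Responses use a 1-5 agreement scale (1 = strongly disagree).

def sus_score(responses: list[int]) -> float:
    """Score 10 SUS responses on the standard 0-100 scale."""
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positive statements, even items negative.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # 0-40 raw sum -> 0-100 scale

# Hypothetical answers from one test participant:
answers = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
print(f"SUS: {sus_score(answers)}")  # 85.0 — above the 68-point average
```

In practice you would average the scores of all respondents before comparing against the 68-point benchmark.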

Why care about usability metrics?

Meme: a man on the Moon complaining about usability

The basic rule of user research states that conducting just a handful of user interviews already uncovers a large share of a product's usability problems. So why bother measuring quantitative UX metrics?

Well, if this is your first time running user tests, you should stick to qualitative tests, of course: the first priority is getting to know your users well. However, once a company gets serious about user research, quantitative data comes into play.

What is the difference between qualitative and quantitative tests, then? The former gives us valuable insights like "users find the navigation of the website confusing," while the latter gives data in precise numbers, like "our redesign lets users complete their tasks 61.5% faster than the old design."

The latter insight does not tell you what exactly makes a new design work faster than the old one and doesn't tell you how it can be further improved. However, when you have to justify redesign to the CEO, solid data would look more convincing than excerpts from user interviews.

The same metrics can be a good basis for assessing a design team's success and defining design KPIs. They help with an old problem of UI/UX designers: a good interface is barely noticeable. Few people understand how much work lies behind these seemingly "simple and obvious" solutions.

With the urge to make their changes more visible, designers are sometimes tempted to make small but noticeable adjustments like switching colors, replacing buttons, and so on. These are the things that annoy users every time their favorite app changes. This is what happened to Twitter, by the way: we wrote about the scandal around the Twitter redesign recently.

How do metrics help with it? When designers know that their objective is to improve the metrics, they won't be just changing visuals and reshaping logos to make the results of their work more "noticeable." Their management knows the KPIs and can easily see the impact.

All in all, tracking usability metrics is a sign of a company with a certain level of UX maturity. Once you decide to invest in that, you'll find out that usability metrics can be as valuable as CAC, MRR, AARRR, and others.

To sum up

Usability metrics provide an invaluable tool for objectively evaluating and enhancing the user experience of a product. While qualitative insights offer rich, anecdotal evidence of a user's journey, quantifiable metrics deliver the concrete data necessary for informed decision-making and strategic planning. By integrating metrics like success score, error rate, task time, efficiency, SEQ, and SUS into the design process, product teams can pinpoint areas of improvement, validate design changes, and effectively communicate the value of UX initiatives to stakeholders.

Embracing these metrics is more than just a commitment to data-driven design; it's a reflection of a company's maturity in understanding and prioritizing user experience. By focusing on measurable outcomes, design teams can avoid the pitfalls of superficial changes and instead, drive meaningful improvements that resonate with users and contribute to the product's overall success. In an increasingly competitive market, harnessing the power of usability metrics is not just advisable — it's essential for any company seeking to deliver exceptional products that meet and exceed user expectations. Remember, in the world of SaaS, great design isn't just about how a product looks; it's about how well it works for the people who use it.

Curious to find out what lies beyond usability testing? Read our article about other crucial UX research methods.

And if you need an experienced design partner to make sure all of your metrics are stellar, drop us a line.


Masha Panchenko

Author
