Is The Number of Clicks Really that Important? Debunking the 3-Click Rule

January 18, 2019
Like any field, there are ideas and best practices in web design and experience that have become central tenets. While these “laws” are meant to improve web usability and experience, they’re certainly not immutable.
One web usability idea that seems to persist is the three-click rule, or the idea that it should take no more than three clicks for a visitor to reach their desired piece of content. This idea is credited to UX thought leader Jakob Nielsen.
Today we’re going to debunk the 3-click rule and demonstrate why this rule needs a refresh.
A product of its environment: The origin of the 3-Click Rule
The 3-click rule may not be as relevant as it once was, but that doesn’t mean it never had any merit. Think back to 20 years ago and the early days of the consumer internet: data was more expensive (and therefore less accessible), performance was much less reliable, and user confidence in websites was much lower. In that environment, every click mattered, and the longer it took web visitors to accomplish a task, the more money companies were losing. So, when it was created, the 3-click rule was not just a proxy for usability but also an important consideration for operational costs.
But things are different now…
While the 3-click rule made sense when it was created, it’s now too dated to hold up at face value. The internet has changed, and with it, so have the rules of what makes a site truly usable. So, while this rule may have good intentions, the assertion that clicks correlate directly to user confidence just doesn’t ring true like it used to.
A study shared on UIE mapped satisfaction against clickstreams and indicated that “fewer clicks do not make more satisfied users.” In streams ranging from 3-24 clicks, there was little difference in reported satisfaction levels. See the graph below.
While clicks may seem like a good proxy for usability, they often don’t directly correlate with user satisfaction. Maintaining three clicks as a hard and fast rule isn’t worth a trade-off of user satisfaction.
Another argument for the 3-click rule is that it caps the amount of energy you are asking of a user. You could say that clicks require users to exert some energy, but put in perspective, a click is not a major task. As Susan Weinschenk points out in 100 Things Every Designer Needs to Know About People, “you use up more [brain power] by asking [users] to think or remember to do a mental calculation (cognitive), than when you ask them to look at something on a screen (visual).” While there is some effort required in clicking, it’s certainly not the most taxing activity you can ask of your users.
We’re inclined to agree with a new take on the 3-click rule from web expert Chas Grundy:
“A more flexible approach to the classic rule is my 1 Click Rule: Every click or interaction should take the user closer to their goal while eliminating as much of the non-destination as possible.”
Having a clear path forward and consistently demonstrating progress can have a positive impact on usability, regardless of the number of clicks.
Don’t throw your clicks out the window
We’re not saying that you should completely ignore the number of clicks that it takes to accomplish a task or inflate that number unnecessarily. Rather, we maintain that a better metric to focus on is user confidence and satisfaction.
If people know they can get what they need and have a sense of progress toward that goal, they will click however many times is necessary. In fact, sometimes it’s better to have more clicks to segment a task or set of content so as not to overwhelm your audience.
For example, we divide Qualaroo nudges into multiple screens because that makes it easier for respondents to focus on the question at hand as opposed to getting overwhelmed by a long webpage full of questions. Even though users have to click many times, they are confident that they are getting closer to their goal.
One of our favorite ways to instill confidence for users is adding a progress bar (either visually or numerically) to help visitors understand where they are in their process. This communicates that there may be more to do but also that they are getting closer to their goal.
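To make the idea concrete, here is a minimal sketch of a multi-step flow that exposes both a visual (fractional) and numeric progress indicator. All names here are illustrative, not from the article or any particular library:

```typescript
// Illustrative sketch: a multi-step flow whose progress can drive
// a progress bar (fraction) or a numeric "Step X of Y" label.
type Step = { id: string; label: string };

class StepFlow {
  private current = 0; // zero-based index of the active step
  constructor(private steps: Step[]) {}

  // Fraction of steps completed, e.g. for a progress bar's width.
  progress(): number {
    return this.current / this.steps.length;
  }

  // Numeric indicator, e.g. "Step 2 of 4".
  indicator(): string {
    return `Step ${this.current + 1} of ${this.steps.length}`;
  }

  // Advance to the next step, clamping at the final one.
  next(): void {
    if (this.current < this.steps.length - 1) this.current++;
  }
}

const flow = new StepFlow([
  { id: "cart", label: "Review cart" },
  { id: "shipping", label: "Shipping" },
  { id: "payment", label: "Payment" },
  { id: "confirm", label: "Confirm order" },
]);
flow.next();
console.log(flow.indicator()); // "Step 2 of 4"
```

Even though this flow asks for several clicks, each one visibly moves the user forward, which is the point: progress communicated clearly matters more than the raw click count.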
User confidence is really at the heart of what we should all strive for. In fact, we’d argue that more clicks can be a good thing in some scenarios. For example, when you’re making a purchase on a new site, you’d probably feel more confident if you had a review screen to confirm all of the details of your purchase and payment information. If an additional screen or confirmation page can prevent a user from making an error, it may be for the best. More clicks aren’t always a bad thing, especially when they inspire trust.
Some exceptions to the “rule”
In some scenarios, fewer clicks really are better. Specifically, you should avoid additional clicks that add no value to your user’s experience or that are merely repetitive.
One example we love in which the pursuit of fewer clicks makes a lot of sense is with Amazon 1-click. If you’re not familiar with this offering, it streamlines your purchasing process by auto-saving your payment and shipping information and allowing you to purchase in…one click.
Image via Amazon.com
1-click works because it eliminates the need for users to repeatedly enter and verify the same shipping and payment information. This is great for the user experience, as frequent Amazon shoppers get nothing out of repeating the same process every time, and it’s also good for Amazon, as it reduces barriers to purchasing.
Unnecessary steps that add no value to the user experience should be avoided as they only make it more difficult for visitors to achieve their goals.
We wish that usability was as simple as a finite number of clicks equaling a positive user experience. However, the rules just aren’t that simple. We recommend keeping track of user satisfaction and conducting user testing regularly as opposed to adhering to a particular number of clicks.
This post originally appeared on UsabilityGeek.