9 Common Mistakes in Prototype Testing and How to Avoid Them

October 31, 2019

This article will cover the major Dos and Don’ts of prototype testing. We’ll walk you through the most common mistakes we see in the field and share tips on how to avoid them. 

If you’ve ever conducted user testing on prototypes before, you know just how many moving pieces there are in the process. If you haven’t already, we recommend checking out our Step by Step Guide to Prototype Testing for a foundational understanding. It’s a comprehensive checklist and reference point we use ourselves before we do any user testing on prototypes. 

Reading over these resources can help make sure that your prototype testing runs smoothly and you get the insights you need faster and more easily.

Mistake #1: Not setting intentional goals 

If you take away nothing else from this article, make sure it's this: avoid mistake #1. Testing without goals is like embarking on a treasure hunt without a map. What are you working towards? How can you make sure you get to that golden pot of user insights? Setting goals gives your user tests a purpose instead of turning them into a standard procedure. Without intentional goals, you may be ticking off the testing box on your to-do list, but you won't be able to tell whether what you're doing has any value or solves user needs. 

What to do instead: Decide what your goal/research question is 

We’re fans of the reliable SMART goals framework. SMART goals are Specific, Measurable, Achievable, Relevant and Time-bound.

Let’s walk through a more concrete example of what this can look like with prototype testing by looking at a possible goal.

Goal: I want to uncover whether users can find the feature they are looking for within the prototype. 

It’s specific because you are trying to understand the usability and navigation of your SaaS tool.

It’s measurable because in the end, the user either did or didn’t find it. 

  • You can ask questions to measure your prototype’s performance such as 
    • How difficult was it for you to find feature x?
    • What was difficult about finding this feature? 
    • What was easy about finding this feature? 
    • Did it take you more or less time than you expected to complete this task? 
    • From your experience with other tools, would you normally spend this amount of time on a task like this? 
  • Ask questions about any other metrics you’re interested in tracking. Remember that these can be qualitative, not just quantitative. 

The goal is achievable because it has clear and realistic outcomes. 

You aren’t saying, “I want every user to be able to find the feature.” That would be setting yourself up for failure. Instead, you’re trying to learn what’s working and where you can improve the user experience. 

Is it relevant? Well, if you just finished redesigning your flow, then yes, this goal is totally relevant. 

But let’s say you just redesigned the landing page. In that case, product navigation might be a secondary goal behind concept validation and figuring out whether visitors can quickly understand what your company does from the landing page.

Finally, is it time-bound? Make sure you have enough time to do user testing on this question. 

  • Is there room within your research scope and budget to allocate time for this?
  • Do you have extra time for additional goals?

Mistake #2: Testing a prototype that’s unfinished or too polished

As a general rule of thumb, testing prototypes at a lower fidelity stage is a smart move. Your mockups don’t need to be perfect for you to understand how users will react to the product. However, there’s a balance to strike. You don’t want something so lo-fi that it doesn’t resemble the final product closely enough. If it’s too hi-fi, you may be wasting time perfecting one version instead of getting feedback as soon as possible. And if it looks too polished, testers may hold back on direct feedback because it appears someone has already spent a lot of time on it. 

What to do instead: Test a few versions at the right fidelity

Put together a few rough versions at the same fidelity, not too low and not too high. Having only one prototype might make people less likely to give you the open and critical feedback you need to actually push the product forward. Having a few versions lets you see which one best helps users achieve their goals. If you don’t have time to let users test different versions, consider doing a side-by-side comparison of the different options you’re exploring to see which they find most intuitive. 

Mistake #3: Being underprepared

If users don’t have clear instructions that they can review before the test, they might feel lost before they’ve even begun interacting with your product. 

Instructions should include context on the prototype. Outline up front any limitations they might experience with the current iteration so that hiccups in the prototype feel expected rather than distracting. 

Most importantly though, your instructions should include a list of tasks that you want users to perform during testing. 

What to do instead: Provide simple instructions

Writing instructions doesn’t have to be some kind of long and tedious task. In fact, the shorter and more straightforward your instructions are, the better. 

In Don’t Make Me Think, one of Steve Krug’s guiding principles is to “omit needless words”. He actually uses instructions as an example for when to do so. “When instructions are absolutely necessary, cut them back to the bare minimum.” Since you need some signposting and guidelines for user testing, this is a case where they are absolutely necessary but don’t have to drag on. 

All you need to say is something like: 

Please read the scenarios below and complete the related tasks. 
Answer the questions that follow. This will take you about x minutes. 
We appreciate your help to improve our product. 

Again, if need be, mention any details here about the prototype’s fidelity that the user should be aware of. 

Mistake #4: Assigning bad tasks 

As someone who interacts with the product on a daily basis, you’re probably used to all the acronyms, jargon, and unique features that come with your product. However, your users are not. Bad tasks that ask users to try out specific features are like asking someone to taste ingredients from a recipe instead of making an actual meal. 

When you provide straightforward tasks that tell a user exactly what to do, you won’t capture what it would be like for them to interact with the product in their own day-to-day life. And what you really want is for the testing to feel as seamless and organic as possible. 

Similarly, avoid guiding your testers through specific user journeys or site flows that you have in mind. Sending your users down one journey might result in confirmation bias where the assumptions you’ve made about user flow can’t actually be challenged.

What to do instead: Create task scenarios 

Again, users are motivated by actual goals, and you should be too. No matter what your company does, you have to step outside of the research box and imagine the real-world scenarios where a user or customer will actually be looking for a product like yours. 

For example, in a case study we did with Belron, Customer Journey Improvement Manager Stephen Payne explained what he calls grudge purchases. Belron sells windshields under the U.S. brand Safelite. Payne shared that most people don’t wake up thinking about a new windshield. You make a grudge purchase only “if you have damage and need it fixed immediately.” So let’s adapt this for the context of prototype testing. 

Poor task:

Find information about our glass recycling program.

Their information on glass recycling is interesting and differentiates them as industry leaders, but hunting for it isn’t what a potential site visitor would realistically do when first arriving on the site.

Okay task:

Book an appointment to have your windshield replaced. 

Telling users to simply book an appointment gives them too many clues and doesn’t leave them any roaming room. 

Best way: Task scenario

While you were driving to work this morning, you drove by a golf course 
and a stray ball flew at your window leaving a big crack. 

Find a way to get your windshield replaced. 

This scenario feels realistic and actually might include more than one task. 

Testers might read about other services offered, explore locations near them, or even read reviews before finally scheduling the appointment. In this way, scenarios allow you to learn more than simple tasks would. 

Mistake #5: Asking the wrong questions

This all goes back to goal setting and tasks. 

Let’s go back to the treasure hunt comparison for prototype testing. If having a map is like having a goal and user insights are the treasure chest, then questions are your directions. Asking the right questions can guide you to the right information and user feedback that will be most constructive to making better design decisions. 

What to do instead: Write questions that will give you results 

The solution here is simple: ask yourself whether the questions you’re asking will give you more insight into your end research goal, or whether they can tell you more about how users were able to complete tasks. 

Mistake #6: Asking too many questions 

Okay, we know we just got on a soapbox about how questions are basically the rainbow that leads you to a pot of gold. But all good things in moderation, right?

If you ask too many questions, you run the risk of annoying your users and causing survey fatigue. Annoying your users can negatively bias their feedback against your mockup. Meanwhile, survey fatigue, or the tiredness that comes with answering too many survey questions, will result in lower-quality data towards the end of testing. 

What to do instead: Choose your questions carefully 

Try 1-2 follow-up questions after each task, with 3 questions at the end of the test. If you’re looking for concrete examples of what you can ask, review Step 4 of our guide to testing prototypes here. 

If you come up with a list of questions and find yourself with too many, ask yourself whether each question will provide an answer that serves your ultimate research question and goal. If it doesn’t, eliminate it. 

Mistake #7: Recruiting too strictly! 

One of the most common reasons people skip over user research during the early phase is that they feel they need to gain insights from their target persona only. On the other end of the spectrum, some teams feel that they need to test a statistically significant number of users for their results to count. 

Whether you’re going too narrow or aiming for too large a group of testers, you’re missing one of the key points of user testing: getting your product into the hands of real people for an outside perspective. 

What to do instead: ‘Recruit Loosely and Grade on a Curve’ 

UX thought leader and author Steve Krug writes that conducting user research with just about anybody can be very useful. As he puts it, we should all “recruit loosely and grade on a curve” when it comes to choosing user research participants. Krug’s point is that you can identify usability issues with pretty much any user research testers, not just those who may fit the profile of your ideal customer to a T.

Unless your tool is intended for such a niche group that only a very specific subset of people can use or understand it, you really don’t need to obsess over recruiting only folks who fit neatly into your target audience.

On that note, don’t fret too much over having a large number of user research participants either, because you can still get very valuable insights from just a few. With qualitative user research, you aren’t confined by the need for a representative sample. In fact, if you’re interested in user testing, Jakob Nielsen argues that “the best results come from testing no more than 5 users and running as many small tests as you can afford.” Removing the statistical barriers that come with quantitative research lets you conduct user research more frequently and stretch your research budget even further.

Mistake #8: Not having a system to collect feedback 

Whether your test is moderated, unmoderated, remote, or in-person, you need to have the tools that will help you collect feedback ready to go on the day of testing. Without a system for gathering feedback, the valuable time you just spent on testing sessions will go to waste. If your feedback is disorganized, it will take longer to sort. If it’s not accurately captured and instead relies on the moderator’s memory, it will be prone to so many errors as to become unusable. 

What to do instead: Pick the method that works for you 

The way you collect feedback will mostly depend on the type of prototype you’re using. 

If you’re testing with paper prototypes, then a paper survey might be the best route. Remember though that paper surveys will require extra time later for manual data entry and response coding.

If you’re testing with digital prototypes, you might consider a usability testing tool like lookback.io or an in-app testing tool like Qualaroo, which lets you collect responses right within your mockups.
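Whatever tool you choose, the key is deciding on a structure for responses before the first session. As a purely illustrative sketch (the field names, file name, and rating scale below are our own assumptions, not any particular tool’s format), here is one lightweight way to log unmoderated test feedback in Python: one row per task attempt, appended to a CSV you can code and analyze later.

import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class TaskResponse:
    participant_id: str        # e.g. "P03"
    task: str                  # the scenario or task the tester attempted
    completed: bool            # did they reach the end state?
    time_on_task_seconds: int
    difficulty_rating: int     # e.g. 1 (easy) to 5 (hard)
    notes: str                 # open-ended follow-up answers, verbatim

def append_response(path: str, response: TaskResponse) -> None:
    """Append one task attempt to the feedback log, adding a header row if the file is new."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(TaskResponse)])
        if f.tell() == 0:  # empty file: write the header first
            writer.writeheader()
        writer.writerow(asdict(response))

append_response("prototype_feedback.csv", TaskResponse(
    participant_id="P03",
    task="Find a way to get your windshield replaced",
    completed=True,
    time_on_task_seconds=142,
    difficulty_rating=2,
    notes="Expected a 'Book now' button on the homepage; found it via the menu.",
))

However you capture responses, the point is the same: every answer should land in one organized, analyzable place rather than in a moderator’s memory.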

Mistake #9: Forgetting best practices on the day of testing

Testing without consent to collect information

If you will be collecting any type of personal information and plan to use testers’ responses or record sessions for further analysis, then you should get written, signed consent first. If there’s any possibility that your testers are located in the EU, you’ll have to be mindful of GDPR. In California, there’s the CCPA to take note of. 

Not using neutral language with participants 

The language you use to describe your prototype needs to be neutral. But if we’re being honest, you probably shouldn’t be describing it much at all. It’s the old show-don’t-tell rule. Let testers discover the product themselves rather than telling them about every feature. 

Additionally, if you’re a designer participating in the research process, don’t let users know that you made the prototype, even if you did. The goal is to eliminate as much room for bias as possible, especially since you may be testing with a smaller number of respondents.  

What to do instead: Stay calm and keep a checklist handy 

These day-of-testing slip-ups are completely avoidable with the use of a handy checklist such as this one. There are a lot of moving parts when it comes to prototype testing, so be sure to stay organized. 

Conclusion

Once you’ve finished prototyping, remember that feedback is a gift. If you don’t implement it, or if you assume that the user test was a fluke and you and your team still know best, you’ll miss out on the valuable impact that user insights can offer your product. After you’ve worked so hard to get to that treasured pot of gold, don’t you want to enjoy the wealth of user insights you’ve gained? 

If you take things at a steady pace, you’ll be golden and will easily avoid common beginner’s mistakes like the ones we’ve listed. 

The Qualaroo team has years of collective experience in user feedback and can help you capture insights that matter – at scale.

Get in touch with an expert