Step by Step: Testing Your Prototype

September 5, 2019

Testing prototypes is an inherent part of finalizing designs. Nobody wants to wonder why users aren’t using an app the way it was intended, or why they can’t seem to complete a purchase on your website. And nobody wants to rework something that’s already been shipped.

Conducting tests as early as the prototyping stage can help you avoid these unfortunate scenarios. User research with Qualaroo can help you validate your current design and uncover new areas of focus for the next iteration. Use Qualaroo to understand how people use your interface early on in the design process and evolve your prototype into a working product faster.

If you’re reading this, you probably already understand the value of testing prototypes, but for those who are just starting their adventure with UX, let’s review the benefits. 

One of the main reasons to create prototypes – for digital products, but also across all industries – is to test and validate your designs. With prototypes, designers and their companies want to see if their design makes sense at all: whether it works, whether people can use it, and whether they even care to use it.

Prototypes rarely exist as a single iteration. They’re typically a series of versions that allow for comparison and ultimately choosing the best solution. Hence, there are prototypes of cars presented to the public, sample collections in the fashion industry, floor plans put together by interior designers to visualize whether all the elements will fit in the space, architectural prototypes used to test materials or airflow in ventilation, and industrial design prototypes that refine the most ergonomic forms for products.

Prototypes are not only created for designers to test, but for actual users to test as well. It’s important to get input from those who are not immersed in the design process and don’t know how the designed object is supposed to work. This outsider’s perspective is often the most useful, as it’s unclouded by a preconception of how the design should behave. You can achieve these types of insights even when testing with participants who do not have domain expertise and are not necessarily your target users. Testing early in the design process also saves both time and money in creating and releasing a product.

And last but not least, prototypes are a means of communicating with stakeholders and the teams that will execute on the final product. No matter how hard we try, describing a design with words will never be as effective as sharing a prototype. 

Prototype Testing Step-by-Step

There are a number of prototyping methods that can produce different results. For the purposes of this article, we’ll focus on paper and digital prototypes for software and websites.

When it comes to testing prototypes, there are a few basic rules. The way you go about testing the prototype depends on a couple of things:

  • The type of prototype you have (sketch, storyboard, physical prototypes, paper interface, digital prototype, etc.) will impact how users will interact with it; 
  • Your testing goals are important and will help you develop the testing scenarios and questions to ask; 
  • Time constraints, or in other words, when you need to have results by, will also inform the best testing method for you.

Step 1: Create the prototype.

Your prototype doesn’t need to be perfect or even particularly detailed. You can collect user insights on anything, even rough sketches. Of course, the more detailed and interactive your prototype is, the more you’ll have to test. Sketches are great for verifying whether a user can discern the purpose of your application or website and what they can do with it. Interactive, hi-fi prototypes that more closely resemble the final product will allow for more testing and may also be easier for users to understand. However, remember you are testing prototypes, in part, to avoid putting too much work into developing a solution that will ultimately not be usable. You don’t want to put too much time and effort into what’s “only” the prototype of that solution. It’s more useful to have 2-3 rough prototypes to test than 1 pixel-perfect prototype. Not only can you test a couple of designs, but people are usually more open with critical remarks when they see a few versions, rather than criticizing one design that may seem like the ‘only’ solution.

If the prototype is digital and interactive, it will also require less face-time with users. This is because paper prototypes typically require short interviews with every participant in order to collect useful insights. With digital prototypes, you can also automate the process of collecting insights using tools like Qualaroo that let you ask users questions during or after testing.

It’s also advisable to put real data into your prototype. You don’t need to worry about using finalized pictures and copy, but it is a good idea to use this as an opportunity to test your microcopy. If there is any user data visible in the interface prototype you’re testing, make sure it sounds real. For example, if you want to indicate that the user is logged in and their name would be visible on the profile, just put John Smith rather than coming up with Obi-Wan Kenobi. Obi-Wan Kenobi may be funny, but it will probably just distract your test participant.

Step 2: Decide what you want to test.

This step is all about what you want to validate or verify. 

There are a number of things you can test on a prototype, but prototype testing isn’t perfect for everything.

Things you can and should test on the prototype: 

  • Concept validation – this is not about testing whether you should be building the solution you’re working on at all, but rather verifying if people can quickly figure out what they are looking at and what it does. These tests are most commonly used for home page prototypes, but you can also test product pages in e-commerce or dashboards in online tools.
  • Navigation – navigation can be easily tested in prototypes on the condition that you are using finalized labels and category names. It can help you discover things like whether or not your search field and menu are where people expect to find them and if their naming makes sense. Ultimately you want to test whether or not people find what they think they’re going to find based on the categories you present. 
  • Flow of specific features/functionalities – prototypes are great for testing whether the steps a user takes to accomplish a task in your design are the right ones, and whether they’re in the right order. Does your product flow smoothly, or is it leaving your user confused?
  • Microcopy – as mentioned above, try to put real labels, menu categories, button names and short descriptions on your prototype. Firstly, this will verify whether or not people understand what they’re seeing at first glance (concept validation), but it will also show you if there is anything confusing in the microcopy that should be changed. 

Things you shouldn’t test on prototypes:

  • Graphic design – prototypes are just schematic representations of your final product; they are not colorful and they don’t have all the visual elements, so testing “look & feel” on prototypes is not possible.
  • Content – prototypes are not filled with the final content, and are not the best way to check if your content will resonate with your target audience. To check this, send your content to a couple of people from your target group or display questions in your working product or blog.
  • Volume testing – the idea behind testing prototypes is to collect quality feedback on the functional design so that you can iterate and eliminate the largest issues that would hinder your users’ ability to complete their tasks in your product. Prototypes are not intended to help you collect volumes of data. Technically it’s possible, but feedback from a handful of participants will deliver enough insight to make the next version better. Testing at volume is also difficult with prototypes because you would need to recruit a large number of users to send the prototype link to, and this can be a major obstacle. Prototype testing is about gathering actionable feedback fast, not collecting as much feedback as you can.

Form research questions. Research questions are not questions that you ask users during or after the test. These are questions you are trying to find answers to by asking users to carry out different scenarios with your prototype. Research questions indicate what exactly you are trying to find out about your prototype or product. They should be composed carefully as they will set the direction of your test and determine what the scenarios and tasks for the test will look like. They should not be too general. Keep in mind that based on the outcome of the test you will want to make some design decisions. Research questions can also be formed as goals. 

Example of a bad research question/goal: 

  • I want to test my prototype.

Example of a better research question/goal: 

  • I want to test my navigation.

Example of good research questions/ goals: 

  • I want to check if users will be able to find the information they’re looking for in my prototype. 
  • I want to check if users will be able to find the product they are looking for in the prototype.
  • I want to check which version of the prototype seems easier for users when it comes to finding a specific product on it. 

Depending on how much time you have for testing and what the scope is, you should develop 1-5 research questions. This doesn’t mean you cannot observe other aspects of your design being tested. In fact, every time you conduct user research, there will almost always be plenty of other learnings apart from what you were directly testing. You should still always have 1-5 core aspects you want to test and analyze.

Step 3: Prepare your test scenarios or tasks.

Usability testing is never about just showing your prototype or website to the user and observing what they do with it. It’s about giving users a specific task to perform that’s linked to the problem your product or website is aiming to solve. Tasks (or scenarios) take the form of small narratives; they’re typically brief but still give your test participants some context.

This is the moment when you use your research questions to compose your tasks. Your research questions will tell you what the tasks should be about. Remember, the tasks should focus on the goals of your users, not the functionalities and features of your product.

The best example illustrating the difference is a usability study conducted by Jared Spool and his team for Ikea years ago. The test explored how people found products on Ikea’s website. The initial task was: “Find a bookcase.” Later it was changed to: “You have 200+ books in your fiction collection, currently in boxes strewn around your living room. Find a way to organize them.”

The way the task is formulated influences the results. In this case, users following the first task usually typed “bookcase” in the search field. Users in the second scenario were usually browsing through product categories and searching for any products that would be suitable for storing books, not necessarily products named “bookcase”. In the end, the problem a user tries to solve here is finding furniture to put their books on, whether or not that’s a bookcase. 

This is particularly important when designing products or websites that use very specific language. Try to avoid leading words that would make your users accomplish the task faster or in a different way than they normally would. In general, try not to give clues.

The best approach is to make the task resemble real life. If you are asking your user to book a flight via your app or prototype, don’t just say “Book a flight from Seattle to Amsterdam.” A better scenario would be: “You want to visit your friend in Amsterdam in September. You booked two weeks off at work. You realize it is an expensive flight, but you would like to spend as little as possible. In addition, due to your recent back problems, you are considering upgrading your flight class.”

Another thing to keep in mind is that task narratives are not the place to explain your product or website, nor to sell it. The whole idea of testing is to verify whether people will be able to use it on their own, without anyone explaining anything to them beforehand and without anyone persuading them they should be using it. What’s more, especially if you are doing a face-to-face test, users may be reluctant to be honest with any criticism of the prototype if they see you are attached to it – they will not want to hurt your feelings.

Finally, remember not to write tasks that are impossible to complete. If your prototype doesn’t include the feature, flow, or elements you want to test, you cannot test it.

Step 4: Question time!

Once you have tasks for your participants, the only thing you’re missing is the questions to ask during or after the test.

For a comprehensive guide on how to ask questions, click here.

Below you will find a list of questions we suggest using.   


Questions to validate whether the design communicates, at first glance, what the product or website is for.

  • What do you think this tool/ website is for?
  • What do you think you can do on this website/ in this app?
  • When would you use it? 
  • Who do you think this is for?
  • Is there anything it resembles? If yes, what? 
  • What doesn’t make sense here?

Even though your participants are testing the prototype, call it what it’s supposed to be: website, system, tool, product, application. The more real it feels, the better. 

Questions to be asked after each task the participant performed.

  • Was there anything that surprised you? If yes, what?
  • Was there anything you expected to find that was not there?
  • What was difficult or weird about this task?
  • What was easy about this task?
  • Did you find everything you were looking for?
  • What didn’t look the way you expected?
  • What was missing, if anything?
  • What was unnecessary, if anything?
  • Was anything out of place?
  • If you had a magic wand, what would you change?
  • How would you rate the difficulty level of this task?
  • Did it take you more or less time than you expected to complete this task? Would you normally spend this amount of time on it?

Task-specific questions.

These questions depend solely on what the task was. Some examples would be:

  • How did you recognize that the product was on sale?
  • What information about shipping was missing? 
  • What were the accepted payment methods?
  • How did you know the plan you picked was the right one for you?
  • Do you think booking a flight on this website was easier or more difficult than on other websites you have used in the past?
  • Did sending money via this app feel safe?
  • Do you think data gathered by this app is reliable?

At the end of a test.

  • Try to list the features our tool has. – This question allows you to see what stood out the most to the user. Users never use all the functionalities of a product or website, especially given our current tendency to multiply features in our tools instead of minimizing their number. This question may also indicate which features users ignored or simply didn’t notice in the design.
  • Do you feel this application/tool/website is easy to use?
  • What would you change in this application/website?
  • How would you improve this tool/website/service?

There is also a wonderful design thinking method called “I Like, I Wish, What If”. In this approach, you ask your research participants to finish 3 sentences starting with: “I Like…”, “I Wish…” and “What If…”. This encourages users to share both their positive feedback and criticism, simply because the critical feedback is expressed in a non-negative way. On top of that, “What if…” is a great way to collect ideas from users that your design team didn’t come up with. These questions can be asked after each task or at the end of the test.

We suggest not using more than 4-5 questions in a sequence (after each task), since this can disturb the flow of the test and cause fatigue. If the test is face-to-face, you will probably be asking additional questions that pop up during observation to probe the difficulties participants encountered. If this is a remote test with a tool like Qualaroo, more than 4-5 questions will result in a lower response rate, and since you would only be testing your prototypes on a couple of users, the response rate is important.

Step 5: Time to recruit research participants. 

You can find tips for recruiting user research participants with Qualaroo here.

Step 6: Start testing!

Here are some final things to keep in mind that will help your test to be efficient and useful.

  • If you are collecting any personal information about your test participants, you should get their consent first. This also applies when you are recording them while they test your prototype. You should be particularly careful when conducting the study in the European Union, where GDPR applies. However, keep in mind that outside the EU, more and more countries (and some states in the USA) are introducing similar regulations. Make sure you not only get consent to collect information about participants and/or record them, but also inform them that the data will only serve to prepare conclusions and a summary of the study, and that it will only be used internally and not published anywhere.
  • Make it very clear that the usability test (or UX/user/prototype test/study/research) is not about testing the user (them) but about testing the functional design. This means participants can only help you verify whether the prototype is good or not. Be sure to tell your participants they can’t be wrong.
  • Tell your participants you did not build the prototype. If they think you are the one who developed it, they will refrain from critical remarks so as not to hurt your feelings. To encourage honest feedback, be open and engaged. If this is a face-to-face study, don’t defend the prototype or the design solutions in it. Be neutral and try to avoid emotionally loaded words whenever you are describing the prototype or its elements.

Step 7: What to do after testing.

The whole point of running a test like this is to find out what to change in your interface and how to make it better and more usable for people. The beauty of UX research is that it always works: you will end up with a list of things to fix. Quite likely, you will have more items on your list than you can focus on, given the limited time and deadlines we all have. However, remember you don’t have to fix everything. Try to figure out which usability problems are critical for the user. If you cannot make a decision, invite a person or a few people to a good old debriefing session. Share the results with them and try to pick their brains. You don’t have to invite people from outside of your company, and the people you invite do not have to be UX experts.

Be realistic about how much you can fix before the next round of testing or before handing in your designs to the dev team. 

Pro tips

Moderated or not? Remote or not?

A moderated test means there is a person facilitating the test with participants. This can be done face-to-face, when both people sit in the same room during the test, or remotely, when the participant and moderator are connected via conferencing software, phone, chat, or a combination of these channels. Apart from that, all the other elements are the same: users test prototypes, a live website, or products, and they try to complete tasks prepared in advance by the researcher.

In an unmoderated test, participants get instructions with a list of tasks, but there is no one assisting them, so they cannot ask questions during the test. Unmoderated research is somewhat more popular for remote testing.

You may be wondering whether testing your prototype should be moderated or not. Both approaches have pros and cons. If the session is moderated, you have more control over what is happening, and you may decide during the session to skip some tasks or help your participant when they get stuck for too long. Keep in mind that helping participants is generally not advised, because if some step in the flow is too difficult, we want to see what they would do in such a situation. That said, you should always care about the participants’ wellbeing, so if you notice they are struggling, no longer really trying to solve the problem, and just getting stressed, simply ask them to move on to the next task.

Unmoderated sessions can be difficult to intervene in and typically don’t allow you to ask follow-up or probing questions. However, even with these sessions, it’s possible to introduce changes before all the participants complete the test if you are not sending the link to all participants at once. Simply change the tasks or questions before sending the link to the next participant.

Unmoderated tests are quite effective and still allow you to learn about a lot of usability issues while saving time. They are more scalable and also helpful for remote tests with users in different time zones.

Test your test!

You may want to pilot your test, especially if this is a new domain for you. A trial run will help you quickly catch anything you forgot and show whether participants will easily understand the tasks you prepared for them. This doesn’t require much preparation or recruiting anyone specific. Just ask anyone from your company to sit in front of the screen for a moment and read the tasks. If something is not clear, you will know immediately, because they will be confused and will start asking questions right at the beginning. This is very useful, especially when you are trying hard to choose the right words to describe the tasks and avoid leading or emotionally loaded words, because sometimes you end up with a version that is too tricky to be understood by the average person.

Remember, user research is not supposed to be scientific. What you are trying to get is qualitative data, insights, and an understanding of how people use your interface and what issues they encounter with it. Not every test needs to be the same, and you don’t need to stick to a scenario just for the sake of it. A common-sense approach is best.

The Qualaroo team has years of collective experience in user feedback and can help you capture insights that matter – at scale.

Get in touch with an expert