Usability tests are one of the most important steps in creating successful digital products. For UX researchers, this means selecting testing methods and creating tasks or questions carefully. The first part of the process is to determine whether tests should be moderated or unmoderated. Every team should consider their digital product, users, resources, goals, and limitations when making this choice.
Suppose your team selects moderated usability tests. The next step is to decide whether they would prefer in-person or remote usability testing. Fortunately, there are numerous tests under each combination. Find out about the best moderated usability tests below.
The Best Moderated, In-Person Usability Tests
Moderated, in-person usability tests offer many benefits for teams, namely controlled environments and detailed information about user responses. However, these types of tests require significant resources and dedicated personnel.
1. Lab Usability Tests
Teams run lab usability tests in a special environment (lab) where users use particular equipment and receive guidance from a moderator. Lab tests may be necessary in certain circumstances, such as for games that reflect human movements or applications that analyze user features. In other scenarios, they may be preferable due to higher-quality user data. Moderators and observers can record users as they interact with the digital product, taking note of behavior (such as facial expressions) that would otherwise remain unknown. Furthermore, professionals can follow up on certain responses, gathering more in-depth data; this can be key to making outstanding architecture, design, or content choices down the road that fit the target audience.
While lab usability tests may sound ideal, they may not be right for every team. Typically, lab usability tests require the most resources, including individual equipment and accessories, secure connections, and storage backup options. Additionally, it may be difficult to find users willing to visit in-person facilities for a short test. Often, teams must offer large incentives to participants.
2. Guerrilla Tests or Hallway Usability Tests
Guerrilla tests involve selecting people at random and running a quick usability test with dedicated equipment and guidance from a moderator. Typically, teams will go to busy public places and ask various people if they would like to participate in a 5-15 minute test for an incentive. Guerrilla testing is an excellent way to test specific functions or features of a digital product, rather than the whole product. Moderators and observers can receive instant feedback about the user’s first impression, emotional response, and overall opinion.
Hallway usability tests are less expensive than lab usability tests but still require resources for equipment and incentives. They also have a few limitations or disadvantages. They do not work for complete digital products, nor for complex or niche products. Since participants are selected at random, tests are not run on the target audience; the data may differ greatly from the data teams would collect from their real users. Consequently, teams may wish to perform other types of usability tests if they are assessing performance.
The Best Moderated, Remote Usability Tests
Moderated, remote usability tests help teams recruit more participants, save money, and spend less time generating and compiling relevant data. Since users can access moderated tests from anywhere, these tests are more convenient for both parties. However, teams must grapple with potential technical issues and user distractions.
1. Phone Interviews
Teams call participants, guide them through the tasks remotely, and record user responses and questions. The testing device also records user movements. Together, these two data sources create a valuable resource for teams. Typically, this type of usability test is selected when digital products are complex or unfinished, requiring guidance on which areas of the product should be tested. Likewise, it can be used when teams require more in-depth details about user responses or actions.
A phone interview is one of the tried-and-true methods of retrieving user data and feedback.
It can be a cheaper way to reach target audiences, as participants only need their own device and a call feature or a cell phone. Additionally, teams can reach users around the world without requiring physical locations.
However, moderators and observers cannot see the participant's environment, physical movements, or facial expressions. As a result, users may be dealing with distractions, may not voice every feeling or impression, or may describe things differently than they truly feel. Moderators are also limited in their capacity to guide or help participants, as they cannot use visual aids. As such, personnel must be trained extensively in communication and clarity before running the usability tests.
2. Card Sorting
When teams employ card sorting, they give remote participants access to their digital product and present various concepts on cards. Participants must sort these cards into groups and categories, which may be pre-determined or user-defined. Upon completion, moderators ask about their sorting logic or reasoning. In this way, teams develop deep insights into how a digital product should be structured or organized.
Card sorting is a popular moderated, remote usability testing method. It can be particularly helpful when teams are creating or optimizing their information architecture (IA) or prioritizing content in their digital product. It even helps architects understand how to label certain features or functions.
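To see how card-sort results translate into IA decisions, here is a minimal sketch (with entirely hypothetical card names and participant data) of one common aggregation approach: counting how often participants place each pair of cards in the same group. Pairs that co-occur frequently are candidates to sit together in the information architecture.

```python
# Hypothetical card-sorting data: each participant's sort is a list of
# groups, and each group is a set of card names.
from itertools import combinations
from collections import Counter

sorts = [
    [{"Pricing", "Plans"}, {"Login", "Profile"}],
    [{"Pricing", "Plans", "Profile"}, {"Login"}],
    [{"Pricing", "Plans"}, {"Login", "Profile"}],
]

# Count how often each pair of cards appears in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest which items belong together
# in the product's information architecture.
for pair, count in pair_counts.most_common():
    print(pair, count)
```

In moderated sessions, teams would pair counts like these with participants' stated reasoning; the numbers show *what* users grouped, while the follow-up questions explain *why*.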
Card sorting can also be unmoderated; however, teams will then not develop an understanding of why users think, prefer, or prioritize in certain ways. That understanding can lead to better decision-making for other digital product features or functions.
Although card sorting can be useful, teams should know they may have to run other usability tests later in development. In other words, they must gather feedback from real users once the choices have been implemented properly. Furthermore, since the usability test is remote, participants may complete the first part of the test, then forget or skip the second part due to confusion, refusal, or disinterest. Accordingly, moderators must emphasize the importance of completing the full evaluation. Again, incentives may play a major role in this.