When I do usability testing, particularly with teams that have not been involved in the process before, I often find myself providing a mini lesson about what usability testing is (and is not!), how the process is going to work, and what they’re going to get out of it in the end.

Beyond explaining the process, however, I find that one of the most important things I need to do as the person leading the user research is to set expectations appropriately. Rarely do things go exactly as planned, and setting expectations helps both the team and me go with the flow, working with and adjusting to whatever unexpected challenges come along.

So, what kinds of expectations need to be set, particularly with one-on-one in-person usability testing?

Participant/Recruitment Issues

Participants don’t show up

Having no-show participants is both common and expected. Even when you set your incentive amount appropriately for your audience and make reminder calls the day before, you should still expect a no-show rate in the range of 10-20%. You can plan for no-shows in a few different ways during recruitment. Some stakeholders like recruiting floater participants who sit through several sessions and get paid extra. While I've done this occasionally, I usually prefer simply recruiting extra slots. Worst case, if more participants show up than you need, you can either run the additional sessions or pay the extra participants and dismiss them. If you're doing a focus group or other group-based research, you can still overrecruit and then pay and dismiss, often in consultation with stakeholders about who best rounds out their typical user types.
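If it helps to put rough numbers on how many extra slots to recruit, here's a minimal back-of-the-envelope sketch in Python; the target of eight completed sessions and the 15% no-show rate are hypothetical figures for illustration, not recommendations:

```python
import math

def slots_to_recruit(target_sessions: int, no_show_rate: float) -> int:
    """Estimate how many participants to recruit so that, after the
    expected no-shows, roughly target_sessions people actually show up."""
    # Inflate the target by the expected no-show rate and round up.
    return math.ceil(target_sessions / (1 - no_show_rate))

# Hypothetical example: to complete 8 sessions with a 15% no-show rate,
# recruit 10 participants (8 / 0.85 is about 9.4, rounded up).
print(slots_to_recruit(8, 0.15))  # -> 10
```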

Participants aren’t right

While I do my best to work with clients to create a screener that targets exactly who they consider ideal, representative users, typically before handing the screener off to a recruitment firm, there are still misfires. Sometimes participants intentionally misrepresent themselves, but recruiting firms with a lot of experience assessing validity can generally catch this. More often, I see participants who misunderstood the questions asked during screening and answered positively about attributes they don't actually possess. When these participants show up, if they at least somewhat understand what to do, I'll run them through some tasks to see whether I can gather useful data from them and so that they don't feel bad about being incorrectly recruited. If they are an absolute misfit for the study and there is nothing I can do, I may talk with them briefly about their experiences to soften an abrupt dismissal, but then I will pay them and, as gently as I can, let them go.

Recruiters can’t find enough of the right participants

Even seasoned participant recruiters will tell you that every recruit and audience group is different, with some being significantly more difficult than others. I ask stakeholders to avoid requiring too many criteria for the recruit and instead focus on only those criteria that matter most. Consider reminding your stakeholders that you don't have to match every demographic of their target audience exactly; rather, you need middle-of-the-road participants who can show you typical interface usage.

Script/Session Issues

The script won’t be perfect

Scripted studies, such as usability tests or interviews, require advance planning and a written script, but I often find that the script does not work perfectly in practice. Once I start using certain task wording or trying out certain language, I realize that the text needs slight modification to help participants understand what I'm asking. Other times, as my clients observe, they realize that the answers they're hearing aren't quite getting at their key questions, and they suggest changes or deeper probing questions.

As long as the research is formative data gathering rather than a summative evaluation, a shift in script wording is often okay. However, this situation may offer further justification for overrecruiting: if a task or question turns out to be invalid, the corresponding data or responses from early participants can't be used.

A dry run is not always ideal

Stakeholders may occasionally request a dry run, with, for example, a staff member as a mock participant, ostensibly to make sure that the script works well and takes the right amount of time. I'm not a fan of this approach: it's hard to have a fully authentic, validly timed session when it's not with a representative user. Instead, I prefer to overrecruit with valid users. If the first session or two, or even three, reveal script-related points of confusion or timing issues, we can selectively throw out and revise the faulty items while still including in our collected data the parts of the session that went okay. Consider a longer break between the first and second session, or between the second and third, to pause, assess how testing is going, and introduce any necessary script changes.

The script may be longer than the allotted time

Most of my one-on-one studies allot an hour for each participant. An hour is generally the maximum time participants can comfortably be the center of attention without fatigue. However, the schedule is set and recruiting is often initiated before the script is complete. For an hour-long session, I like a script that feels as though it will take only 40 minutes; in reality, what feels like 40 minutes can last a full hour for some participants. Sometimes I can tell that a script simply feels too long, and I'm fairly sure that at least some participants are going to run over. If this happens to you, tell stakeholders that you can put a high-priority portion of the script first and run a low-priority portion only as time permits. Alternatively, if all questions are equally important, plan, to whatever extent possible, to rotate tasks so that each one gets a fairly equal amount of coverage across participants.
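One simple way to plan that rotation is a round-robin shift of the starting task from one participant to the next. Here's a minimal sketch, with hypothetical task labels and participant counts, just to illustrate the idea:

```python
from collections import deque

def rotated_task_orders(tasks, num_participants):
    """Rotate the starting task for each participant so that, if sessions
    run long and later tasks get cut, every task still gets covered a
    roughly equal number of times across the study."""
    order = deque(tasks)
    schedules = []
    for _ in range(num_participants):
        schedules.append(list(order))
        order.rotate(-1)  # the next participant starts one task later
    return schedules

# Hypothetical example: five tasks, six participants.
for i, schedule in enumerate(rotated_task_orders(["A", "B", "C", "D", "E"], 6), start=1):
    print(f"P{i}: {schedule}")
```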

Location Issues

Locations are unique

Whether you're using a high-end lab with one-way glass, a hotel conference room, or somewhere in the field, each location is going to present different challenges. If you're going to a location that is new for both you and the stakeholders, expect the unexpected: parking issues, security-desk challenges, windows without shades and bright sun, noisy HVAC, oddly located power outlets, or tables that don't align with your vision of the session setup. Be prepared to modify and adapt. If at all possible, leave lots of time for setup, or even consider scoping out the location the day before.

Internet glitches

Yes, it's 2018, and we're supposed to have high-speed internet everywhere these days. But whether you're in a big city or a rural area, and even when you've run bandwidth speed tests or paid for faster data, the internet still glitches. It slows down sometimes, and other times it blips out for no apparent reason. While this is often at least a mild frustration, it's rarely a show stopper. If it happens, don't panic; wait patiently for the connection to recover, and make sure you don't keep the participant longer than the agreed-upon time, even if that means skipping parts of the script.

Reporting

Pick a format, any format

Reporting can take a variety of forms, from a simple discussion between iterations, to bullet points, to a slide deck, to a detailed Word report. Sometimes there are video clips of key findings, other times just the raw videos, and other times no recordings at all. Sometimes there is an immediate debrief, either verbal or in the form of simple bullet points; other times, stakeholders wait patiently for a few days to get the final report.

The important thing is to agree early on about the type of reporting stakeholders want. As a freelancer, I feed this information into the overall budget because different reporting styles can take vastly different amounts of time.

Assess the reporting approach

While I have a default approach to reporting, in either slide or text format, I find it helpful to see examples of stakeholders' prior reports. Sometimes the anticipated format is a reasonable match for the way I'd collect data and report out; other times, reporting expectations are unusual. The client may not realize that their approach to reporting is unique, which makes the examples especially helpful. If you do find what seems like a unique approach to reporting, don't hesitate to probe the stakeholders both before you create the script and after you collect the data, so you can produce what they want without too much headache.

Happy Stakeholders = Successful Research

Ultimately, keeping stakeholders happy provides a good frame for successful research, and keeping them happy relies on good communication and proper expectation setting. If you realize that you didn't set expectations properly, which can happen no matter how hard you try to avoid it, consider it a lesson learned and use it to improve your master list of discussion points with stakeholders for next time.

Image: Digitalista / Bigstockphoto.com