I largely do user experience (UX) research activities such as usability testing, cognitive walkthroughs, ethnography, interviews and focus groups, all centered on how users and potential users interact with existing, updated and new interfaces. Sessions and activities are typically scheduled to last about an hour per participant.

Like my marketing research colleagues, I often rely on recruitment organizations to find participants for these studies. I choose a location and develop a screener intended to ensure that the recruitment facility finds exactly the sample I need to conduct my research.

I’ve used local recruitment organizations across the United States as I conduct studies all over the country, and by and large, recruitment goes very well. These organizations designate a lead recruiter who talks with me a bit to make sure that what I’m looking for is both clear and reasonable. Then, at least a few days before the study starts, I get a spreadsheet of recruited participants, which I can look over to confirm that everything looks as expected.

The recruiter misunderstood!

Recently, I was conducting a 3-day, 15-person usability study in a new location, and from looking at the spreadsheet a week prior, I was confident that the sample of participants recruited was what I had been looking for and what I had defined in the screener. Then the first participant walked in.

I was disappointed to find that the participant wasn’t actually a match for what I needed and, in fact, didn’t understand the tasks I was asking her to do. An occasional mismatch is par for the course, however, and I had prepared for this eventuality by letting the client know that there could be no-shows and that not every participant would be perfect.

When the second participant walked in and wasn’t a proper representative user either, I began to grow concerned. Two faulty recruits is still within the realm of a normal study, but both coming right at the beginning likely signaled a greater problem, which soon proved to be the case: the majority of the recruited participants were not the representative users we needed.

Should this ever happen to you when managing or conducting a user research study, here are my lessons learned from this experience.

#1 Maintain goodwill

Even though the situation was looking grim, I stayed positive, as did my team and the clients who were observing all the sessions. The recruiter stayed positive through it all as well. My main client stakeholder and I worked with the recruiter to pinpoint where the misunderstanding was and what kinds of people we really needed.

Takeaway: Don’t get angry, and don’t give up hope that your study can still be a success. Your best bet at salvaging the project in a situation like this is to work with the recruiter in as good a mood as you can muster.

#2 Rescreen and move sessions forward to allow more breathing room

As soon as we were sure the recruiter understood what we were really looking for, we had her rescreen the rest of the scheduled participants; this is when she discovered just how far off the recruit really was. A few participants were scheduled too soon to cancel and another two couldn’t be reached, but ultimately we found that only about a quarter of the scheduled participants were valid, meaning they could understand the study tasks that would be asked of them. The remainder were not going to be of much use to the study.

We asked the recruiter to try to move up the sessions of the valid participants, allowing more breathing room to backfill the slots left by invalid ones.

Takeaway: Pushing to move up as many valid participant sessions as possible helped save the study. While the timing didn’t align perfectly, the recruiter moved up enough sessions that the later slots could be filled with newly recruited valid participants.

#3 Remain flexible

While my team had planned a comfortable load of 5 sessions per day over 3 days, once we realized it would be hard to reach the desired numbers, we told the recruiter to feel free to start sessions earlier, schedule sessions during lunch, and run later into the day as needed. The recruiter did schedule a long third day to make this happen. We also offered to stay a fourth day if needed, even though it would have meant changing travel plans. In the end, however, we managed to get enough out of the planned 3 days.

Takeaway: Remain flexible. Figure out what you’re capable of doing within both the existing time constraints and the budget. In this case, the project budget was not impacted: while additional session times were scheduled, fewer participants meant less analysis time, so the overall time spent on the project remained roughly as expected.

#4 Plan for extra sessions from the outset

While we did have two separate audience groups for this study, the tasks themselves were very similar (for the most part, the second audience group needed just one extra click to complete the same activities). We allowed for 15 sessions on the assumption that there would be some no-shows. While 15 would have given us a robust report, we ended up with 10 valid sessions, still enough to produce a completely valid report.

Takeaway: Whether through extra sessions, floater participants who get paid to sit through 2 or 3 sessions, or planned makeup days, build in options to keep numbers at the desired level if problems occur.

#5 Always pay participants

One of the stakeholders asked me whether participants still get paid when they were not recruited correctly.

Takeaway: I told the stakeholder that the answer is always yes. While it is best to cancel participants before they come in (generally with at least a day’s notice), if they show up as originally scheduled, they should get paid. Unless you have solid evidence that a participant was dishonest, the working assumption should be that it wasn’t their fault they were recruited incorrectly.

#6 Be honest but gentle with participants

For the first two invalid participants, I ended up probing a bit to confirm that they were, in fact, not the right fit and didn’t understand the tasks. The probing itself cued them in to the fact that something wasn’t quite right. One participant fully knew what she didn’t know and said, as she was leaving, “Their recruiting was a bit off, wasn’t it?” By later that first day, however, I had learned how best to separate valid participants from invalid ones with a few selected questions worked cleanly into the script, and when a participant was invalid, I did my best to make it feel natural that the session didn’t last anywhere near the expected hour.

Takeaway: Don’t lie to participants, but be gentle, and do your best not to let on that they weren’t what you were expecting. Sure, the cash is a fair incentive in itself, but people want to be helpful and don’t want to leave feeling like they did a bad job.

#7 After the test plan is written, review the tasks against the screener

This morning I was drafting a screener and a test plan for a new study. After drafting the task wording, I looked again at the screener and evaluated whether it would reliably find people who would fully understand the tasks to be completed on the website being tested. I ended up adding a new set of bullets to the screener asking potential recruits which activities from a list they do regularly. Without giving away too much to future participants, those activities roughly mapped to the actual tasks they’d be doing in the study.

Takeaway: While a screener often needs to be developed quickly to get the recruiting ball rolling, do your best to scope out the specific tasks before the screener is finalized. Then go back and double-check the alignment between the screener and those tasks.

#8 Create a lessons learned document

I admit that while I started writing this lessons-learned post on the flight home from that study, I got distracted by something else and let it languish for a few weeks. I’m sure there are at least one or two more points I would have thought of then but can no longer remember now that time has passed and I’ve already started planning another study.

Takeaway: Whether things go right or wrong, document lessons learned as soon as you can. Even if you only have time to outline what you want to say, do it sooner rather than later before you forget. If you’re able to, post some of your lessons learned publicly so that others can benefit and so that you can perhaps get feedback and further iterate your methods.

A foolproof way to avoid recruitment failure?

Recruiters, researchers and participants are all human, and communication is never perfect, so until the first few participants show up, you can never be 100% sure you got the right people. That said, planning ahead and rolling with the punches as creatively as you can should let you turn a potential study failure caused by initial recruitment problems into a successful project.
