We’re all biased. This process is biased. In fact, all usability testing is biased because it tries to synthesize the experience of a user — it’s amazing how difficult it is to reproduce something as trivial as using a webpage in a way that actually helps us make that experience better! So, we’re not going to avoid biases entirely. But it does help to understand some of the core biases:
The Hawthorne Effect
This is simply the effect that awareness of being observed has on a volunteer’s actions. Usability testing is meant to approximate the experience of using a website, but because we’ve invited someone in to help us do that, they already know that what they’re doing is synthetic. They might do their best to embody the role of a typical user, but that’s exactly the problem: they’ll do their best! Everything we ask them to do will probably be done better, and with more patience and effort, than we could ever expect of a typical user.
Task Selection Bias
This one’s easy: if we asked about it, it must be important! Again, you’ll observe the volunteer trying very hard to complete a task, much harder than anyone normally would. We chose these tasks because they’re essential to the site; the volunteer knows this too, and their effort will scale accordingly.
Confirmation Bias
Basically, without a procedure like the one I’ve shared with you before, we’d be inclined to test only what we already think is broken. That, of course, leaves a blind spot: it’s often the unknowns that are more critical. In the course of your tasks, pay attention to the things that come up that you didn’t expect, like bugs you weren’t looking for!
There are plenty more. Sometimes I feel like our design is only as good as our self-awareness, which means that understanding our fundamental blindness to reality is our real work. But that’s another post for another blog entirely…