Behind the Mirror: Usability and the Art of Selective Listening


The value of usability testing is a subject of debate: does it contribute a valuable and objective perspective on a proposed design, or does it merely muddy the waters by injecting the uninformed opinions of a dozen people who are not particularly bright? The answer is: it is both, and getting value out of usability testing is like panning for gold in that you have to separate the trash from the treasure by learning to listen selectively.

A Typical Usability Disaster

Consider this scenario: there’s a piece of information you must get the user to provide, and it’s unusual enough that the user may not understand what to type into the box. For users who don’t understand, you have placed an icon on the screen – a white question mark inside a beveled blue button – that they can click for information that will help them respond.

Sitting behind the one-way mirror in a usability test, you see half a dozen people whose noses crinkle when they see the question – they entirely miss the icon that would have helped them to answer it.   When they turn to the proctor conducting the test for help, he points out the icon, and they remark “Oh, I didn’t see that at all. I think that it should be red instead of blue so it would jump out at me.”

Behind the glass, a colleague hisses at you, “I told you it should have been red!”

The report comes back, declaring “Users failed to notice the help icon. Making it red would increase its visibility.”

You know that this is a bad solution.   While the color red does tend to draw attention, it’s not a panacea.   Red is reserved for serious errors, and if you use it to call attention to optional or marginally useful things, its ubiquity will dilute its power to draw attention to serious matters.   But now you have a team convinced that usability testing has scientifically proven that help icons ought to be red.

Separating Fact from Opinion

While the scenario above is entirely contrived, most designers who have been through usability testing can identify with it all too well: it’s a very painful experience, hence a very memorable one, to be compelled to do something that is clearly ill-advised and will do more harm than good, just because a dozen people who are not designers think it’s a good idea.

The difficulty is in separating the treasure (facts) from the trash (opinions), and in the scenario above, the distinction is quite clear:

FACT: Users did not notice the help icon

OPINION: Making it red would make it noticeable

This should be clear from the scenario because users made a two-part statement, separating the problem (fact) from a proposed solution (opinion). Usability results are not always this definitive, and sometimes the facts are obscured by opinion. A user who simply mutters “that icon should be red” without stating the reason why is implying the problem without stating it.

The fact cannot and should not be denied: users clearly did not notice the help icon.   The opinion should be considered and discarded – because making something red isn’t the only way to make it more noticeable:

  • Moving the icon closer to the input field may make it more noticeable
  • Using a phrase such as “what this means” instead of an abstract icon might make it identifiable as advice
  • Changing the wording of the question may enable the user to answer without needing further assistance

One of these solutions may be as effective as, or even more effective than, simply resorting to “red” as a way to get the user over the hurdle. And as a designer, you need the latitude to experiment with different design solutions rather than taking the suggestions from usability at face value.

Before the Test: Setting Expectations

One effective way to regain that latitude is to set expectations before the test, particularly when there will be observers in attendance who have never witnessed a test.

A pre-test meeting for all attendees should be routine. You gather the team to go over the test model so that everyone knows what is going to be reviewed and to point out areas of particular concern. This is an excellent opportunity to set expectations for the usability test.

You can do this with a simple opening statement – when everyone in the room is still fresh, eager, and paying attention:   “Tomorrow, we will be usability testing this page. Usability will give us a lot of helpful information about whether this design solution accomplishes our goals, but it will also provide a lot of opinion and conjecture, so we’ll have to listen carefully and determine whether the remarks of the users reflect fact or opinion so that we come out of the lab with good information.”

Depending on how experienced the observers are, it may be necessary to provide them with a bit more information – and the scenario above can be a useful example.   “When a user mentions he didn’t see a button and thinks it should be red, we can take as fact that the button was not noticeable, but making it red is an opinion that we can consider – there may be other things we can do with the design to solve the problem.”

Those two statements, which will take less than fifteen seconds to speak at the beginning of the meeting, will do much to explain to observers the ground rules for interpreting usability test results.

After the Test: Doing Damage Control

No matter how carefully you prepare the observers for usability testing, there will be a few who don’t get it.   Maybe they didn’t attend the meeting, or maybe they wish to leverage usability to gain corroborating evidence for one of their (awful) design suggestions that you rejected.   In any case, there are going to be instances in which you have to do damage control.

The first defensive maneuver is influencing the way in which the test report is written. There should be a debriefing immediately after any usability test to gather people’s observations before writing up a formal report, and at this meeting the observations can be addressed in much the same way: by separating fact from opinion.

Largely, it’s a matter of diction. When someone says “I heard several participants say the button should be red,” you can interject immediately: “I heard them make the same suggestion.” Simply by introducing the word “suggestion” into the conversation, you have laid the groundwork to explore other options – and since your statement does not contradict them directly, it is less likely to trigger their defenses and start an argument.

It’s also important to make sure that the written report after the test makes this distinction as well. A good proctor will do this automatically, but not all proctors are good and even the best of them have moments in which they are rushed in their work and may fail to make the distinction clearly enough.   If you can do so without giving offense, suggest “when you write the report, could you document that as a suggestion?” Or when you see the draft of the report (which you should), you can be more emphatic: “Please document that as a suggestion. There are a few other approaches I’d like to try and I don’t want to be locked down.”

Of course, there will always be instances in which someone clings stubbornly to the notion that a suggestion made in a usability test is scientific fact – and you’ll simply have to wear them down by repeatedly explaining the difference between fact and opinion, and insisting that you try other things.

Retesting: The Last Recourse

Your last recourse for removing the trash from the treasure is a retest: once you have made adjustments to the design to address the issues, put it through another round of testing to prove that the solutions did, in fact, address the issues.

Where usability indicated a catastrophic issue – the user would bail out of the task entirely, pick up the telephone or visit a competitor’s store – doing a second usability test is highly advisable, and communicating the severity of the problem should help build support for slowing down a project to do a second round of usability testing.

Where the problem is less severe, or when time and money are so constrained that you cannot go back to the lab, testing in production may be feasible: build out two solutions (one that reflects the opinion expressed in the test, another that reflects a different approach) and subject them to an A/B or multivariate test so that you can gather hard numbers to prove which is really better in a real-world application.
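If you want a sense of how those hard numbers get evaluated, the sketch below runs a simple two-proportion z-test on task-completion rates for two variants. It is only an illustration: the function, the variant labels, and every number in it are hypothetical, not drawn from any real test.

    # Minimal sketch: judging an A/B test of two design variants with a
    # two-proportion z-test on task-completion rates. All numbers are hypothetical.
    from math import sqrt, erf

    def two_proportion_z_test(successes_a, trials_a, successes_b, trials_b):
        """Return (z, two-sided p-value) for the difference in completion rates."""
        p_a = successes_a / trials_a
        p_b = successes_b / trials_b
        pooled = (successes_a + successes_b) / (trials_a + trials_b)
        se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Variant A: the red icon suggested in the lab; Variant B: the icon moved
    # next to the input field. Completion counts are made up for the example.
    z, p = two_proportion_z_test(successes_a=178, trials_a=1000,
                                 successes_b=214, trials_b=1000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference, not noise

A result like this settles the argument with evidence rather than opinion: whichever variant wins, the team is deciding on measured behavior instead of a remark made in the lab.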

The danger in doing so is that the bad idea might just work better than your alternative solution.   When that happens, you likely need to reconsider fighting for your design – acknowledging that designers are human beings, just like everyone else, and are just as prone to falling in love with a bad idea.   In truth, they are actually a lot worse about it.

 ***

Hopefully, the tactics suggested in this article will help designers approach usability testing with less anxiety. Ensuring that your teammates have the appropriate expectations, and particularly that they are coached in how to interpret the remarks of test participants to sort fact from opinion, should make the process more straightforward and more productive.

About Jim Shamlin

With over two decades of experience in marketing and customer service in digital channels, Jim Shamlin maintains focus on the human element: Technology is a tool - serving customers is its purpose.
