Extracting Trends
Having grouped all the observations, go through the groups and consolidate them, splitting apart any groups that mix unrelated topics. Throw away groups that contain only one or two observations. For each remaining group, try to characterize the problem in a single short sentence, with a couple of sentences to fully describe the phenomenon. Explain the underlying cause as much as possible, keeping the description of the phenomenon separate from your hypothesis about its cause. Concentrate on describing the problem, its immediate impact on the user experience, and where it occurred. Be very careful when suggesting solutions. Ultimately, the development team knows more about the technology and the assumptions that went into the product, and the responsibility for isolating underlying causes and finding solutions is theirs. Your recommendations should serve as a guide to where solutions could exist, not edicts about what must be done.
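The grouping-and-filtering step above can be sketched in code. This is a minimal illustration, not a prescribed tool; the topics and notes are hypothetical examples, and the threshold of three observations reflects the rule of discarding groups with only one or two.

```python
from collections import defaultdict

# Hypothetical observation records from session notes: (topic, note) pairs.
observations = [
    ("checkout", "Couldn't find the coupon field"),
    ("checkout", "Expected coupon entry before payment"),
    ("checkout", "Missed the coupon link entirely"),
    ("search", "Typo tolerance surprised one participant"),
    ("navigation", "Back button lost the filter state"),
    ("navigation", "Breadcrumbs ignored on every task"),
    ("navigation", "Used browser Back instead of in-app nav"),
]

# Consolidate observations into groups by topic.
groups = defaultdict(list)
for topic, note in observations:
    groups[topic].append(note)

# Discard groups with only one or two observations; they aren't trends yet.
trends = {topic: notes for topic, notes in groups.items() if len(notes) >= 3}

for topic, notes in sorted(trends.items()):
    print(f"{topic}: {len(notes)} observations")
```

Each surviving group would then get its one-sentence characterization and supporting description by hand; the code only narrows the field to recurring patterns.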
Describe the severity of the problem from the user’s perspective, but don’t give observations numerical severity grades. If a shorthand characterization of observations is desired or requested, categorize the observations in terms of the effects they have on the user experience, rather than assigning them an arbitrary number. Such an effect scale might include “Prevents an activity,” “Causes confusion,” “Does not match expectations,” and “Seen as unnecessary.”
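As a sketch of the idea, an effect scale can be represented as a fixed set of labels attached to observations, rather than as numbers. The categories are taken from the scale above; the report entries are hypothetical.

```python
from enum import Enum

# The effect scale from the text: descriptive labels, not numeric grades.
class Effect(Enum):
    PREVENTS_ACTIVITY = "Prevents an activity"
    CAUSES_CONFUSION = "Causes confusion"
    MISMATCHES_EXPECTATIONS = "Does not match expectations"
    SEEN_AS_UNNECESSARY = "Seen as unnecessary"

# Hypothetical report entries: each observation carries a label from the scale.
report = [
    ("Coupon field hidden behind a small link", Effect.PREVENTS_ACTIVITY),
    ("Breadcrumbs ignored on every task", Effect.SEEN_AS_UNNECESSARY),
]

for summary, effect in report:
    print(f"[{effect.value}] {summary}")
```

Keeping the labels descriptive preserves the user’s-perspective meaning that a bare “severity 3” would lose.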
It’s easy to turn user severity measures into project development priorities, but doing so is usually inappropriate. What’s most important to a user’s success with the product is not necessarily what’s most important to the product’s success. Inform the product team of problem severity from the user’s perspective. Problem severity can help inform project priorities, but the two aren’t the same.
Once all this is done, you should have a list of observations, hypotheses for what caused the phenomena, and quotations that reinforce and summarize the observations. Some of those observations will likely please your stakeholders. But usability reports can be controversial, too. Nobody likes hearing bad news, but it’s almost unavoidable with usability tests. No product is perfect all the time, and usability tests are designed to find trouble spots. By this time, you will likely have a good sense of which, if any, of your observations will be controversial or difficult for your stakeholders to accept.