Mike Kuniavsky

Observing the User Experience: A Practitioner's Guide to User Research (Interactive Technologies)

855 printed pages

Quotes

  • Артём Терехов has quoted, 3 years ago
    As you assemble your report, consider these common-sense tips for breaking bad news:

    • Don’t present an entirely negative report. There are bound to be some positive comments you can find to soften a harsh set of findings. Don’t invent positivity where there is none—but by the same token, don’t assume that your job is only to present problems. Celebrate successes where you can.

    • Put the focus on real people. Demonstrate that the source of critical findings is not the usability research team. Quotations from users, especially in a video-highlights clip, establish the credibility of your findings and give them more weight. It’s harder to doubt the existence of a problem when a video clip shows a succession of real people struggling with your product.

    • Be constructive. We know we told you to be wary of making design edicts. But that doesn’t mean you should never make any recommendations. If a problem has an obvious fix, especially if suggested by evaluators, say so! Even if your recommendation isn’t taken up, you have at least acknowledged that there are potential solutions.

    The problem with consistently bringing only bad news is twofold. First, if your stakeholders are already fearful about the product, it can make the problems seem hopeless. Second, you can damage the credibility of usability research in general. Usability can become synonymous with criticism, and people understandably end up resenting it. The goal is for stakeholders to welcome your reports, not dread them.

    Luckily, it’s usually not that hard to balance calls for improvement with praise for success. Let’s take a look at a usability report prepared for the Wikimedia Foundation by the design and research firm gotomedia. (Note: We’ve edited this report down because our space is limited, but what you see here remains basically identical to what was delivered.)
  • Артём Терехов has quoted, 3 years ago
    Extracting Trends
    Having grouped all the observations, go through the groups and consolidate them, separating the groups of unrelated topics. Throw away those that only have one or two individual observations. For each group, try to categorize the problem in a single short sentence, with a couple of sentences to fully describe the phenomenon. Explain the underlying cause as much as possible, separating the explanation of the phenomenon from your hypothesis of its cause. Concentrate on describing the problem, its immediate impact on the user experience, and the place where the problem occurred. Be very careful when suggesting solutions. Ultimately, the development team knows more about the technology and the assumptions that went into the product, and the responsibility for isolating underlying causes and finding solutions is theirs. Your recommendations should serve as a guide to where solutions could exist, not edicts about what must be done.

    Describe the severity of the problem from the user’s perspective, but don’t give observations numerical severity grades. If a shorthand for the characterization of observations is desired or requested, categorize the observations in terms of the effects they have on the user experience, rather than assigning them an arbitrary number. Such an effect scale could be “Prevents an activity,” “Causes confusion,” “Does not match expectations,” “Seen as unnecessary.”

    It’s easy to turn user severity measures into project development priorities. This is usually inappropriate. What’s most important to a user’s success with the product is not necessarily what’s most important to the product’s success. Inform the product team of problem severity from the user’s perspective. Problem severity can inform project priorities, but the two aren’t the same.

    Once all this is done, you should have a list of observations, hypotheses for what caused the phenomena, and quotations that reinforce and summarize the observations. Some of those observations will likely please your stakeholders. But usability reports can be controversial, as well. Nobody likes hearing bad news, but it’s almost unavoidable with usability tests. No product is perfect all the time, and usability tests are designed to find trouble spots. By this time, you will likely have a good sense of which, if any, of your observations will be controversial or difficult for your stakeholders to accept.
  • Артём Терехов has quoted, 3 years ago
    Organizing Observations
    First, read through all the notes once to get a feeling for the material. Look for repeated concerns as well as multiple issues that may have their origin in common underlying problems.

    Then put all the observations into a pile (literally, or in a single large document). Opening a separate document in a word processor, go through each observation and group it with other similar observations in the new document. Grouping can be based on superficial similarity (“Term not understood”), feature cluster (“Shopping cart problems”), or underlying cause (“Confusing information architecture”). Group the observations with the most broadly sweeping, underlying causes. Pull quotations out and group them with the causes that they best illustrate.

    Much as with other forms of qualitative data analysis (see Chapter 15 for a deeper discussion), organizing usability testing information and extracting trends can be done in a group with the development team (and other stakeholders, as appropriate). This allows the group to use its collected knowledge to flesh out the understanding of the problem and to begin working on solutions.
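The excerpts above describe a mechanical core that can be sketched in code: group observations by theme, discard groups with only one or two observations, and characterize each surviving group by its worst effect on the user experience rather than an arbitrary numeric severity grade. Below is a minimal Python sketch of that workflow; the function name `extract_trends`, the tuple shape of an observation, and the `min_group_size` threshold are illustrative assumptions, while the effect-scale labels come directly from the text.

```python
from collections import defaultdict

# Effect scale from the excerpt, ordered from most to least severe.
# The ordering is an assumption; the labels are quoted from the text.
EFFECT_SCALE = [
    "Prevents an activity",
    "Causes confusion",
    "Does not match expectations",
    "Seen as unnecessary",
]

def extract_trends(observations, min_group_size=3):
    """Group observations by theme, drop sparse groups, and
    characterize each trend by the worst observed effect.

    `observations` is a list of (theme, effect, note) tuples, with
    themes and effects assigned during the initial read-through.
    """
    groups = defaultdict(list)
    for theme, effect, note in observations:
        groups[theme].append((effect, note))

    trends = {}
    for theme, items in groups.items():
        # "Throw away those that only have one or two individual
        # observations."
        if len(items) < min_group_size:
            continue
        # Characterize the group by the most severe effect on the
        # effect scale, not a numeric severity grade.
        worst = min(items, key=lambda it: EFFECT_SCALE.index(it[0]))[0]
        trends[theme] = {
            "effect": worst,
            "observations": [note for _, note in items],
        }
    return trends
```

As the text cautions, output like this only describes problems and their impact; turning it into development priorities or solutions remains the product team’s call.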
