Mike Kuniavsky

Observing the User Experience: A Practitioner's Guide to User Research (Interactive Technologies)

  • Артём Терехов has quoted 3 years ago
    As you assemble your report, consider these common-sense tips for breaking bad news:

    • Don’t present an entirely negative report. There are bound to be some positive comments you can find to soften a harsh set of findings. Don’t invent positivity where there is none—but by the same token, don’t assume that your job is to present only problems. Celebrate successes where you can.

    • Put the focus on real people. Demonstrate that the source of critical findings is not the usability research team. Quotations from users, especially in a video-highlights clip, establish the credibility of your findings and give them more weight. It’s harder to doubt the existence of a problem when a video clip shows a succession of real people struggling with your product.

    • Be constructive. We know we told you to be wary of making design edicts. But that doesn’t mean you should never make any recommendations. If a problem has an obvious fix, especially if suggested by evaluators, say so! Even if your recommendation isn’t taken up, you have at least acknowledged that there are potential solutions.

    The problem with consistently bringing only bad news is twofold. First, if your stakeholders are already fearful about the product, it can make the problems seem hopeless. Second, you can damage the credibility of usability research in general. Usability can become synonymous with criticism, and people understandably end up resenting it. The goal is for stakeholders to welcome your reports, not dread them.

    Luckily, it’s usually not that hard to balance calls for improvement with praise for success. Let’s take a look at a usability report prepared for the Wikimedia Foundation by the design and research firm gotomedia. (Note: We’ve edited this report down because our space is limited, but what you see here remains basically identical to what was delivered.)
  • Артём Терехов has quoted 3 years ago
    Extracting Trends
    Having grouped all the observations, go through the groups and consolidate them, keeping unrelated topics in separate groups. Throw away those that only have one or two individual observations. For each group, try to categorize the problem in a single short sentence, with a couple of sentences to fully describe the phenomenon. Explain the underlying cause as much as possible, separating the explanation of the phenomenon from your hypothesis of its cause. Concentrate on describing the problem, its immediate impact on the user experience, and the place where the problem occurred. Be very careful when suggesting solutions. Ultimately, the development team knows more about the technology and the assumptions that went into the product, and the responsibility for isolating underlying causes and finding solutions is theirs. Your recommendations should serve as a guide to where solutions could exist, not edicts about what must be done.

    Describe the severity of the problem from the user’s perspective, but don’t give observations numerical severity grades. If a shorthand for the characterization of observations is desired or requested, categorize the observations in terms of the effects they have on the user experience, rather than assigning them an arbitrary number. Such an effect scale could be “Prevents an activity,” “Causes confusion,” “Does not match expectations,” “Seen as unnecessary.”
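
    As a concrete sketch, a consolidated finding from the two steps above could be recorded as a small data structure. This is a minimal illustration in Python, not a format from the book: the Effect values mirror the example scale, while the Finding fields and the sample data are assumptions.

        from dataclasses import dataclass, field
        from enum import Enum

        class Effect(Enum):
            # Effect-on-the-user-experience scale from the text, used in
            # place of an arbitrary numeric severity grade.
            PREVENTS_ACTIVITY = "Prevents an activity"
            CAUSES_CONFUSION = "Causes confusion"
            MISMATCHES_EXPECTATIONS = "Does not match expectations"
            SEEN_AS_UNNECESSARY = "Seen as unnecessary"

        @dataclass
        class Finding:
            problem: str        # one short sentence categorizing the problem
            description: str    # a couple of sentences describing the phenomenon
            hypothesis: str     # your hypothesis of the cause, kept separate
            location: str       # where the problem occurred
            effect: Effect      # effect on the user experience, not a number
            quotes: list[str] = field(default_factory=list)

        # Example values are invented; the quotation echoes one cited later
        # in this chapter.
        finding = Finding(
            problem="Navigation label not understood",
            description="Several participants paused at the 'Forkopolis' link "
                        "and said they did not know what it meant.",
            hypothesis="The name conveys nothing about the link's destination.",
            location="Home page navigation",
            effect=Effect.CAUSES_CONFUSION,
            quotes=["I don't understand what 'Forkopolis' means, "
                    "so I wouldn't click there."],
        )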

    It’s easy to turn user severity measures into project development priorities, but doing so is usually inappropriate. What’s most important to a user’s success with the product is not necessarily what’s most important to the product’s success. Inform the product team of problem severity from the user’s perspective. Problem severity can inform project priorities, but the two aren’t the same.

    Once all this is done, you should have a list of observations, hypotheses for what caused the phenomena, and quotations that reinforce and summarize the observations. Some of those observations will likely please your stakeholders. But usability reports can also be controversial. Nobody likes hearing bad news, but it’s almost unavoidable with usability tests. No product is perfect all the time, and usability tests are designed to find trouble spots. By this time, you will likely have a good sense of which, if any, of your observations will be controversial or difficult for your stakeholders to accept.
  • Артём Терехов has quoted 3 years ago
    Organizing Observations
    First, read through all the notes once to get a feeling for the material. Look for repeated concerns as well as multiple issues that may have their origin in common underlying problems.

    Then put all the observations into a pile (literally, or in a single large document). Opening a separate document in a word processor, go through each observation and group it with other similar observations in the new document. Similarity can be superficial (“Term not understood”), by feature cluster (“Shopping cart problems”), or by underlying cause (“Confusing information architecture”). Group the observations with the most broadly sweeping underlying causes. Pull quotations out and group them with the causes that they best illustrate.
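
    As a minimal sketch, this grouping step amounts to building a map from underlying cause to observations. The group labels below echo the examples above; the observation texts are invented for illustration:

        from collections import defaultdict

        # One (cause, note) pair per observation; the texts are invented.
        observations = [
            ("Confusing information architecture",
             "P1 looked for settings under 'Account'"),
            ("Shopping cart problems",
             "P2 could not change the item quantity"),
            ("Term not understood",
             "P3 asked what 'Forkopolis' means"),
            ("Confusing information architecture",
             "P4 gave up browsing and used search"),
            ("Confusing information architecture",
             "P5 never found the help section"),
        ]

        # Group observations under the most broadly sweeping underlying causes.
        groups: dict[str, list[str]] = defaultdict(list)
        for cause, note in observations:
            groups[cause].append(note)

        # When extracting trends (see above), discard groups with only one
        # or two individual observations.
        trends = {cause: notes for cause, notes in groups.items()
                  if len(notes) >= 3}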

    Much as with other forms of qualitative data analysis (see Chapter 15 for a deeper discussion), organizing usability testing information and extracting trends can be done in a group with the development team (and other stakeholders, as appropriate). This allows the group to use its collective knowledge to flesh out the understanding of the problem and to begin working on solutions.
  • Артём Терехов has quoted 3 years ago
    One UK user experience design and evaluation consulting company uses the following range to measure how long people take to perform a task:

    0. Fail

    1. Succeed very slowly in a roundabout way

    2. Succeed a little slowly

    3. Succeed quickly

    Most of the time, you don’t need any more precision, since making critical comparisons only requires an order-of-magnitude measure. Each scale should have three or five steps (don’t use two, four, or six since it’s hard to find a middle value; don’t use more than five because it tends to get confusing) and a separate value for failure.
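
    Recorded as data, that kind of range might be encoded like this. The encoding is an assumption for illustration; only the step labels come from the scale above:

        from enum import IntEnum

        class CompletionScore(IntEnum):
            FAIL = 0                      # the separate value for failure
            SUCCEED_VERY_SLOWLY = 1       # in a roundabout way
            SUCCEED_A_LITTLE_SLOWLY = 2
            SUCCEED_QUICKLY = 3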

    Make a grid for each participant consisting of the task metrics you’re going to collect. As you watch the videos, note the severity in each cell (when appropriate, define severity using the same language and scale that is used by the development team to define how serious code bugs are). For the fork tasks, the following table would reflect one person’s performance.

    Then, when compiling the final analysis, create a table for each metric that summarizes the whole user group’s experience. For the completion time metric, two summary tables could look like Table 11.5 and Table 11.6.

    Table 11.5. Tina’s Task Performance

    Table 11.6. Task Performance Time Measures Summary

    The average numbers, although not meaningful in an absolute context, provide a way to compare tasks to each other and between designs.
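
    A minimal sketch of that roll-up follows. Tina appears in Table 11.5, but the other participant names, the fork task names, and all scores here are invented for illustration:

        # Per-participant grids: task -> score on the 0-3 completion scale.
        grids = {
            "Tina": {"Find a fork": 3, "Buy a fork": 1, "Return a fork": 0},
            "Raj":  {"Find a fork": 2, "Buy a fork": 2, "Return a fork": 1},
            "Mia":  {"Find a fork": 3, "Buy a fork": 0, "Return a fork": 2},
        }

        tasks = ["Find a fork", "Buy a fork", "Return a fork"]

        # One summary row per task, averaged across the whole user group.
        # The averages only support comparisons between tasks or between
        # designs, not absolute judgments.
        for task in tasks:
            scores = [grid[task] for grid in grids.values()]
            average = sum(scores) / len(scores)
            print(f"{task:14} scores={scores} average={average:.1f}")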

    Note feature requests and verbatim quotations from the evaluators, especially ones that encapsulate a particular behavior (e.g., “I don’t understand what ‘Forkopolis’ means, so I wouldn’t click there”). Feature requests are often attempts to articulate a problem that the evaluator can’t express in any other way. However, they can also be innovative solutions to those same problems, so capture them regardless.
  • Артём Терехов has quoted 3 years ago
    Conducting usability studies with eye tracking is a highly specialized skill and can be very costly. It requires pricey equipment, significant computing power, and extensive training of the moderator. Because eye-tracking data are “noisy” (i.e., people’s eye movement patterns are unpredictable and vary for many reasons), eye-tracking studies usually require more participants than conventional usability studies. The large quantities of numerical data they generate require a knowledgeable analyst to produce useful insights.

    You should consider conducting eye-tracking studies if all the following are true:

    1. You have a specific objective or task you want to enable, and the task has a clear measure of success.

    2. Your objective has to do with the way users visually process a website or software interface. For example, you want them to notice and read the lead article, or browse more of the products on a catalog page.

    3. You have already determined that the content is of interest to users, conducted interview-based usability testing and A/B testing (see Chapter 16 for more on A/B testing), and optimized your page design based on the results, and you have not seen the improvements you seek.
  • Артём Терехов has quoted 3 years ago
    • Don’t take extensive notes during the test. This allows you to focus on what the user is doing and probe particular behaviors. Also, the participants won’t jump to conclusions about periods of frantic scribbling, which they often interpret as an indicator that they just did something wrong.

    • Take notes immediately after, writing down all interesting behaviors, errors, likes, and dislikes. Discuss the test with any observers for 10–20 minutes immediately after and take notes on their observations, too.
  • Артём Терехов has quoted 3 years ago
    Usability test observer instructions
    Listen. As tempting as it is to immediately discuss what you’re observing, make sure to listen to what people are really saying. Feel free to discuss what you’re seeing, but don’t forget to listen.

    Usability tests are not statistically representative. If three out of four people say something, that doesn’t mean that 75% of the population feels that way. It does mean that a number of people may feel that way, but it doesn’t mean anything in the context of your larger user population.

    Don’t take every word as gospel. These are just the views of a couple of people. It’s great if they have strong opinions, but trust your intuition in judging their importance unless there’s significant evidence otherwise. So if someone says, “I hate the green,” that doesn’t mean that you change the color (though if everyone says, “I hate the green,” then it’s something to research further).

    People are contradictory. Listen to how people are thinking about the topics and how they come to conclusions, not necessarily their specific desires. A person may not realize that two desires are mutually incompatible, or he or she may not care.

    Be prepared to be occasionally bored or confused. People’s actions aren’t always interesting or insightful.

    Don’t expect revolutions. If you can get one or two good ideas out of each usability test, then it has served its purpose.

    Watch for what people don’t do or don’t notice as much as you watch what they do and notice.

    For in-room observers, add the following instructions:

    Feel free to ask questions when the moderator gives you an explicit opportunity. Ask questions that do not imply a value judgment about the product one way or another. So instead of asking, “Is this the best-of-breed product in its class?” ask “Are there other products that do what this one does? Do you have any opinions about any of them?”

    Do not mention your direct involvement with the product. It’s easier for people to comment about the effectiveness of a product when they don’t feel that someone with a lot invested in it is in the same room.

    If the observers are members of the development team, encourage them to wait until they’ve watched all the participants before generalizing and designing solutions. People naturally want to start fixing problems as soon as possible, but teams need to determine the context, magnitude, and prevalence of a problem before expending energy to fix it.
  • Артём Терехов has quoted 3 years ago
    Give observers some instructions on acceptable behavior.
  • Артём Терехов has quoted 3 years ago
    Getting as many members of the development team as possible to observe the tests is one of the fastest ways to communicate the findings and win them over.
  • Артём Терехов has quoted 3 years ago
    A major component of effective usability tests is getting people to say what they’re thinking as they’re thinking it. Introduce the technique up front, but also emphasize it during the actual interview.