An evaluation method in which evaluators work through a set of representative tasks and ask questions about each task as they go.
An assessment of whether the order of prompts in a system reflects the way people cognitively process tasks.
To get quick and early feedback on whether a design solution is easy for a new or infrequent user to learn, and why it is or isn’t easy. This method is useful for catching big issues at any stage in the design process when you don’t have access to real users, but it is not a substitute for user evaluation.
- This is a usability inspection method that evaluates a system’s anticipated ease-of-use without instruction, coaching, or training.
- Each step of the interaction with the system can be assessed as either moving the individual closer to or further from their goal.
- Evaluators ask four questions for each step in the sequence:
- Will users want to produce whatever effect the action has?
- Will users see the control (button, menu, label, etc.) for the action?
- Will users recognize that the control will produce the effect that they want?
- Will users understand the feedback they get, so they can confidently continue on to the next action?
- It should be used with usability testing to uncover different classes of design issues and problems.
- Define the users of the product and conduct a context of use analysis.
- Determine what tasks and task variants are most appropriate for the walkthrough.
- Assemble a group of evaluators (you can also perform an individual cognitive walkthrough).
- Develop the ground rules for the walkthrough. Some ground rules you might consider are:
- No discussions about ways to redesign the interface during the walkthrough.
- Designers and developers will not defend their designs.
- Participants should not tweet, check email, or engage in other behaviors that would distract from the evaluation.
- The facilitator will remind everyone of the ground rules and note infractions during the walkthrough.
- Conduct the actual walkthrough.
- Provide a representation of the interface to the evaluators.
- Walk through the action sequences for each task from the perspective of the "typical" users of the product. For each step in the sequence, see if you can tell a credible story based on the following questions (Wharton, Rieman, Lewis, & Polson, 1994, p. 106):
- Will the user try to achieve the right effect?
- Will the user notice that the correct action is available?
- Will the user associate the correct action with the effect that the user is trying to achieve?
- If the correct action is performed, will the user see that progress is being made toward the solution of the task?
- Record success stories, failure stories, design suggestions, problems that were not a direct output of the walkthrough, assumptions about users, comments about the tasks, and any other information that may be useful in design. Use a standard form for this process.
- Bring all the analysts together to develop a shared understanding of the identified strengths and weaknesses.
- Brainstorm on potential solutions to any problems identified.
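The per-step record-keeping in the procedure above can be sketched as a small data structure. This is only an illustrative sketch, not part of the method itself: the `StepRecord` class, its field names, and the example "save document" task are assumptions made for the example; the four questions are the ones quoted from Wharton et al. (1994).

```python
from dataclasses import dataclass

# The four questions asked at every step of the action sequence.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "Will the user see that progress is being made toward the goal?",
)

@dataclass
class StepRecord:
    """One row of a walkthrough recording form (illustrative field names)."""
    action: str
    answers: tuple  # one bool per question: True = a credible success story
    notes: str = ""

    def failed_questions(self):
        """Questions for which no credible success story could be told."""
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

# Example: two steps of a hypothetical "save document as PDF" task.
steps = [
    StepRecord("Open the File menu", (True, True, True, True)),
    StepRecord("Choose 'Export' to save as PDF",
               (True, True, False, True),
               notes="Users may look for 'Save as PDF' rather than 'Export'."),
]

# A step is a potential learnability problem if any question fails.
problems = [(s.action, s.failed_questions())
            for s in steps if s.failed_questions()]
for action, fails in problems:
    print(action, "->", fails)
```

Keeping one record per step makes it straightforward to aggregate failure stories across tasks and evaluators when the analysts later develop their shared understanding of strengths and weaknesses.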
- May be done without first-hand access to users.
- Unlike some usability inspection methods, it takes explicit account of the user's task.
- Provides suggestions on how to improve the learnability of the system.
- Can be applied during any phase of development.
- Is quick and inexpensive to apply if done in a streamlined form.
- The value of the data is limited by the skills of the evaluators.
- Tends to yield a relatively superficial and narrow analysis that focuses on the words and graphics used on the screen.
- The method does not provide an estimate of the frequency or severity of identified problems.
- Following the method exactly as outlined in the research is labor intensive.
Further reading: http://www.usabilitybok.org/cognitive-walkthrough