A quick way to find common, significant usability problems on a website by evaluating an interface against an agreed-upon set of usability best practices.
- This is a usability inspection method that asks evaluators to assess an interface against a set of agreed-upon best practices or usability “rules of thumb.”
- Unlike usability tests with actual users, these evaluations enlist team members to inspect and fix baseline usability problems before user testing.
- When heuristics are applied repeatedly during an iterative design process, the principles will become more intuitive and usability problems easier to detect.
- Evaluations can be conducted by novices trained on the heuristics as well as by evaluators already familiar with both the subject matter and usability practices.
- The method can help detect critical but missing dialogue elements early in the design process, and can also confirm which parts of the interface are working well.
- When used in the middle phases of the design process, even with low-fidelity prototypes, evaluations can make later usability tests more effective.
How to do it
- Decide which aspects of a product and what tasks you want to review. For most products, you cannot review the entire user interface, so consider what type of coverage will provide the most value.
- Decide which heuristics will be used.
- Recruit a group of three to five people familiar with evaluation methods. These people are not necessarily designers, but are familiar with common usability best practices. They are usually not users.
- Ask each person to individually create a list of “heuristics” or general usability best practices. Examples of heuristics from Nielsen’s “10 Usability Heuristics for User Interface Design” include:
- The website should keep users informed about what is going on, through appropriate feedback within reasonable time.
- The system should speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.
- Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue.
- Ask each person to evaluate the website against their list and write down possible problems.
- After individual evaluations, gather people to discuss what they found and prioritize potential problems.
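The final aggregation step above can be sketched in code. The issue names, the 1–4 severity scale, and the data shapes below are illustrative assumptions, not part of the method itself; the point is simply that problems reported by more evaluators, and at higher severity, float to the top of the discussion.

```python
# Sketch of the aggregation step: merge each evaluator's individual findings
# and rank shared problems by (how many evaluators reported it, max severity).
# Severity scale here is an assumption: 1 = cosmetic ... 4 = catastrophic.
from collections import defaultdict


def prioritize(findings: dict[str, list[tuple[str, int]]]) -> list[tuple[str, int, int]]:
    """findings maps evaluator name -> [(issue, severity)].
    Returns (issue, evaluator_count, max_severity) tuples, highest priority first."""
    reported_by = defaultdict(set)
    max_severity = defaultdict(int)
    for evaluator, issues in findings.items():
        for issue, severity in issues:
            reported_by[issue].add(evaluator)
            max_severity[issue] = max(max_severity[issue], severity)
    ranked = [(issue, len(evs), max_severity[issue]) for issue, evs in reported_by.items()]
    ranked.sort(key=lambda r: (r[1], r[2]), reverse=True)
    return ranked


# Hypothetical findings from three evaluators:
findings = {
    "ana": [("no visible system status", 3), ("jargon in labels", 2)],
    "ben": [("no visible system status", 4)],
    "carla": [("jargon in labels", 2), ("no undo on delete", 4)],
}
for issue, count, severity in prioritize(findings):
    print(f"{count}/3 evaluators, severity {severity}: {issue}")
```

A spreadsheet works just as well for small teams; the value is in deduplicating and ranking before the group discussion, not in the tooling.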
Benefits
- Inexpensive relative to other evaluation methods.
- Intuitive, and easy to motivate potential evaluators to use the method.
- Advance planning is not required.
- Evaluators do not need formal usability training.
- Can be used early in the development process.
- Faster turnaround than formal usability testing with users.
Weaknesses
- As originally proposed by Nielsen and Molich, the evaluators would have knowledge of usability design principles, but were not usability experts (Nielsen & Molich, 1990). However, Nielsen subsequently showed that usability experts would identify more issues than non-experts, and “double experts” – usability experts who also had expertise with the type of interface (or the domain) being evaluated – identified the most issues (Nielsen, 1992). Such double experts may be hard to come by, especially for small companies (Nielsen, 1992).
- Individual evaluators identify a relatively small number of usability issues. Multiple evaluators are recommended, since a single expert is likely to find only a small percentage of problems, and the results from multiple evaluators must be aggregated.
- Heuristic evaluations and other discount methods may not identify as many usability issues as other usability engineering methods, for example, usability testing.
- Heuristic evaluation may identify more minor issues and fewer major issues than would be identified in a think-aloud usability test.
- Heuristic reviews may not scale well for complex interfaces. In complex interfaces, a small number of evaluators may not find a majority of the problems and may miss some serious ones.
- Does not always readily suggest solutions for the usability issues that are identified.
- Biased by the preconceptions of the evaluators.
- In heuristic evaluations, the evaluators only emulate the users – they are not the users themselves.
- Heuristic evaluations may be prone to reporting false alarms: problems that are reported but are not actual usability problems in practice.
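The case for multiple evaluators above can be made concrete with Nielsen's simple model of problem discovery, where each evaluator independently finds a fixed average share of the problems. The per-evaluator detection rate of about 31% is Nielsen's published average across studies; treat both the model and the constant as rough planning aids, not guarantees for any particular interface.

```python
# Sketch: expected share of all usability problems found as evaluators are
# added, using Found(n) = 1 - (1 - rate)^n. The default rate of 0.31 is an
# average reported by Nielsen; real projects vary widely around it.


def share_found(evaluators: int, detection_rate: float = 0.31) -> float:
    """Expected fraction of problems found by n independent evaluators."""
    return 1 - (1 - detection_rate) ** evaluators


for n in (1, 3, 5):
    print(f"{n} evaluator(s): ~{share_found(n):.0%} of problems found")
```

Under these assumptions, one evaluator finds roughly a third of the problems while five find over 80%, which is why three to five evaluators is the commonly recommended range.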