A heuristic evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface (UI) design. Specifically, evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”).
Heuristic evaluations are usually conducted by a small set (one to three) of evaluators, who independently examine a user interface and judge its compliance with a set of usability principles. The result of this analysis is a list of potential usability issues or problems.
The usability principles, also referred to as usability heuristics, are taken from published lists. Ideally, each potential usability problem is assigned to one or more heuristics to facilitate fixing it. As more evaluators are involved, more true problems are found.
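The claim that more evaluators find more true problems is often quantified with the Nielsen–Landauer problem-discovery model, in which the proportion of problems found by i independent evaluators is 1 − (1 − λ)^i, where λ is the probability that a single evaluator finds a given problem (roughly 0.31 on average in Nielsen's case studies). A minimal sketch, assuming that model and that average λ:

```python
# Nielsen-Landauer model: expected proportion of usability problems
# found by i independent evaluators, where lam is the probability that
# a single evaluator detects a given problem (~0.31 on average in
# Nielsen's aggregated case studies).
def proportion_found(i: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** i

for i in (1, 3, 5, 10):
    print(f"{i} evaluator(s): {proportion_found(i):.0%}")
```

The curve flattens quickly, which is why a handful of evaluators is usually considered a reasonable cost/benefit trade-off.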
The method can provide quick and relatively inexpensive feedback to designers, and that feedback can be obtained early in the design process. Assigning the correct heuristic to a problem can help suggest the best corrective measures to designers.
Applying the heuristics effectively requires a certain level of knowledge and experience, and trained usability experts are sometimes hard to find and can be expensive. Multiple evaluators are recommended, and their results must be aggregated. The evaluation may also identify more minor issues and fewer major issues.
Rolf Molich and Jakob Nielsen (1990) developed a set of heuristics that is probably the most widely used in interface design. After evaluating several competing sets of heuristics, Nielsen (1994) refined them into the following set:
Visibility of system status – The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world – The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom – Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards – Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall – Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use – Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
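As a concrete illustration of the "error prevention" heuristic above, the sketch below checks an error-prone condition up front and asks for confirmation before a destructive action is committed. The function and its confirmation callback are hypothetical, invented for illustration rather than taken from the sources above:

```python
# Illustrative sketch of the "error prevention" heuristic: check the
# error-prone condition and ask for explicit confirmation before the
# user commits to a destructive, non-undoable action.
def delete_records(records: list, confirm) -> list:
    """Delete all records, but only after an explicit confirmation.

    `confirm` is a callable taking a message and returning True/False,
    so the prompt can be a GUI dialog, a CLI y/n question, or a stub.
    """
    if not records:
        return records  # nothing to delete; avoid a pointless prompt
    if confirm(f"Delete {len(records)} record(s)? This cannot be undone."):
        return []
    return records

# Usage with a stubbed confirmation that declines:
remaining = delete_records([1, 2, 3], confirm=lambda msg: False)
```

Passing the confirmation step in as a callable keeps the safeguard testable and independent of any particular UI toolkit.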
- Molich, R. and Nielsen, J., Improving a human-computer dialogue, Communications of the ACM, 33(3), 338-348, (1990).
- Nielsen, J., Enhancing the explanatory power of usability heuristics, CHI’94 Conference Proceedings, (1994).
Gerhardt-Powals (1996) proposed a complementary set of cognitive engineering principles:
Automate unwanted workload – Free cognitive resources for high-level tasks; eliminate mental calculations, estimations, comparisons, and unnecessary thinking.
Reduce uncertainty – Display data in a manner that is clear and obvious.
Fuse data – Reduce cognitive load by bringing lower-level data together into a higher-level summation.
Present new information with meaningful aids to interpretation – Use a familiar framework, making it easier to absorb, and use everyday terms, metaphors, etc.
Use names that are conceptually related to function – Attempt to improve recall and recognition.
Group data in consistently meaningful ways – Decrease search time.
Limit data-driven tasks – Reduce the time spent assimilating raw data; make appropriate use of color and graphics.
Include in the displays only that information needed by the user at a given time.
Provide multiple coding of data when appropriate.
Practice judicious redundancy.
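The "fuse data" principle above can be sketched in code: rather than presenting every raw sample, fold lower-level readings into one higher-level summary the user can absorb at a glance. The function and the sample data are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative sketch of the "fuse data" principle: collapse a stream
# of low-level readings into a single higher-level summary, so the
# display shows a few meaningful numbers instead of raw samples.
def fuse(readings: list[float]) -> dict:
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# A display built on this summary shows four numbers at a glance
# instead of a scrolling list of raw samples.
summary = fuse([71.2, 70.8, 73.5, 69.9])
```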
- Gerhardt-Powals, J., Cognitive engineering principles for enhancing human-computer performance, International Journal of Human-Computer Interaction, 8(2), 189-211, (1996).
In theory, the heuristics relate to criteria that, if improved, could make a positive difference in the product’s usability. Unfortunately, the “usability problems” identified in a heuristic evaluation differ substantially from those obtained in performance testing. Of the sets above, only the Gerhardt-Powals heuristics have been validated.
In an expert review, the heuristics are assumed to have been previously learned and internalized by the evaluators. That is to say, evaluators do not use a clear-cut set of heuristics. As a result, the expert review tends to be less formal, and usually there is no requirement to assign a specific heuristic to each potential problem.