A heuristic is a fallible means of solving a problem. A heuristic evaluation is a method in which a usability evaluator identifies usability or user experience problems in a product by reviewing it against an existing list of widely accepted usability heuristics. In some organizations, heuristic evaluation is also called an expert review.
Heuristics have been developed in other spheres too, such as testing and development. The earliest usability heuristics were defined by Jakob Nielsen and Rolf Molich in 1990, in their paper 'Improving a Human-Computer Dialogue', which at the time was mostly targeted towards desktop software. Since then, product ideas have evolved and technologies have grown more capable and complex, changing the way usability work is done. Heuristic evaluation followed suit.
How is heuristic evaluation done?
A simple heuristic evaluation process includes the following steps:
- Identify the usability expert (in-house or external) who will review the product against a defined set of usability heuristics
- Define the scope of the evaluation: specific features only, newer features only, or the entire product
- Define the environment in which the evaluation will be performed (live product, test environment, or sandbox)
- Direct the usability expert to perform the evaluation and fill in an evaluation form highlighting every usability problem encountered at the task level.
Once the evaluation is complete, the results must be analyzed with the corresponding stakeholders, who can then make appropriate product decisions based on the usability problems identified.
A typical evaluation report contains usability findings and a detailed explanation of the problems encountered. In my experience, when such reports are floated across technical teams to fix these problems, the teams are left wondering what they should do. For example, a usability problem like "I was unable to navigate to 'Home' from the 'Settings' screen" doesn't really tell the developer how to handle the problem or provide a fix (not until they delve deeper into it). Hence, it is good to ask the usability expert to provide feature recommendations in addition to usability problems. This means that for each usability problem identified, sufficient screenshots, investigation notes, and subsequent recommendations (of what exactly needs to be done to fix the usability problem) are also recorded. In some cases, usability experts even include best practices from competitor apps to support their findings.
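To make such a report actionable, each finding can be captured as a structured record rather than a free-text note. Here is a minimal sketch of what such a record might look like; the field names and the `Finding` class are illustrative assumptions, not part of any standard template:

```python
from dataclasses import dataclass, field

# Hypothetical record for a single usability finding. The fields mirror the
# advice above: problem, task context, evidence, and a concrete recommendation.
@dataclass
class Finding:
    heuristic: str          # which heuristic the product violates
    task: str               # the task during which the problem surfaced
    problem: str            # what the evaluator observed
    recommendation: str     # what the team should actually change
    screenshots: list = field(default_factory=list)  # evidence attachments
    notes: str = ""         # investigation notes

f = Finding(
    heuristic="User control and freedom",
    task="Return to Home from Settings",
    problem="Unable to navigate to 'Home' from the 'Settings' screen",
    recommendation="Add a persistent 'Home' action to the Settings navigation",
)
print(f.heuristic)
```

With a record like this, a developer receives not just the complaint but the task context and a concrete suggested fix.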
One evaluator is NOT a user
Arnie Lund once said, "Know thy user, and you are not thy user." For the same reason, usability findings from heuristic evaluation often get discounted as 'some user's opinion that doesn't matter.' There is an additional risk that the evaluator holds 'not so normal' views and is therefore a poor representative of the average user. This leads many to frown upon heuristic evaluation.
In fact, Jakob Nielsen's research found that a single evaluator is not enough. According to him, it is good to have at least 3 to 5 evaluators, who together end up finding most of the usability problems in the product. This approach also helps in fine-tuning the findings, distinguishing low-hanging fruit from really serious usability problems. The results are then aggregated and evaluated by an evaluation manager, who is different from the main evaluators. The impact of such a heuristic evaluation is much greater than that of one done by a single evaluator.
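The aggregation step the evaluation manager performs can be sketched as code. This is my own illustrative sketch, not Nielsen's method: each evaluator submits a list of problem identifiers, and problems reported independently by more evaluators are surfaced first, as a rough proxy for how real they are:

```python
from collections import Counter

def aggregate(findings_per_evaluator):
    """Merge findings from several evaluators, ranked by how many
    evaluators independently reported each problem."""
    counts = Counter()
    for findings in findings_per_evaluator:
        counts.update(set(findings))  # count each problem once per evaluator
    return counts.most_common()

# Hypothetical reports from three evaluators (identifiers are invented).
reports = [
    ["no-home-link", "tiny-tap-targets"],
    ["no-home-link", "jargon-in-errors"],
    ["no-home-link", "tiny-tap-targets", "jargon-in-errors"],
]
ranked = aggregate(reports)
print(ranked[0])  # "no-home-link" was reported by all three evaluators
```

A problem flagged by all evaluators is unlikely to be one person's quirk, which addresses the 'some user's opinion' objection above.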
A Complementary Approach
On a few e-commerce projects, I applied a complementary approach. While one or two evaluators provided feedback on the product, a separate team performed user testing with 15-25 users. At the end of user testing, findings from the users were collated into a 'Usability Report.' The results of both reports were then compared by an evaluation manager, who identified feature recommendations based on the inputs from both. This approach worked really well for startups.
The success of heuristic evaluation and other complementary approaches is defined not just by the process, but by the strength of the heuristics involved, the type of information captured, and the way it is presented to stakeholders. This is why heuristic evaluation isn't something that can be done 'on the fly' by just anyone. It needs to be performed by experienced practitioners who are aware of their biases and present unbiased findings. Such evaluations performed by usability experts are called Expert Reviews.
In short, heuristic evaluation is done by evaluators who refer to specific heuristics and evaluate the product against them; every usability problem found through this method is mapped to an existing heuristic on which the evaluation was based. Expert reviews, on the other hand, are performed by subject matter experts in an informal atmosphere, using a list of heuristics that may not be well-defined at all.
Update (4th Aug 2014)
My teacher, James Bach, was kind enough to point out how Heuristic Evaluation is different from heuristic evaluation, and to highlight a few gaps in my understanding of it. I have updated this post based on my improved understanding.