I heard about the term 'Heuristic' for the first time in 2009, when oracles and heuristics were just being introduced by my teacher Pradeep Soundararajan through his workshops and blogs.
At the time, I didn't fully grasp the depth of the word. As with any concept, I assumed it would be understood with time and experience. As I explored UX, I was glad to see the UX world adopt this term long before other industries had even learned about it.
A heuristic is a guideline. For example, 'Don't drink and drive' is a heuristic. While this is a guideline, it neither means that all people who drink and drive will *always* meet with accidents nor that all people who don't drink will *never* meet with accidents. A heuristic is a simple rule of thumb, which is fallible, and which is dependent on the context in which it is put to use.
Heuristic evaluation is a usability inspection method that helps identify usability or user experience problems in a product, usually performed by a usability expert with strong domain knowledge. In some organizations it is also called a UX expert review or usability audit. The earliest usability heuristics were defined by Jakob Nielsen and Rolf Molich in 1990, in their paper 'Improving a human-computer dialogue', which at the time was mostly aimed at desktop and text-based interfaces. Over time, product ideas have evolved and technologies have become more capable and more complex, changing the way usability work is done. Heuristic evaluation followed suit.
How is Heuristic Evaluation done?
A simple heuristic evaluation process includes the following steps:
- Identify the usability expert (in-house or external)
- Define the scope of the evaluation: specific features only, just the newer features, or the entire product
- Identify the target user group, specific user persona and the characteristics associated with that user persona
- Define tasks specific to the user persona selected previously
- Define the environment in which the evaluation will be performed (live product / test environment / sandbox)
- Direct the usability expert to perform the evaluation and fill in an evaluation form highlighting every usability problem encountered at the task level.
Once the evaluation is complete, the results must be analyzed with the corresponding stakeholders, who can then make appropriate product decisions based on the usability problems identified.
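The planning steps above can be sketched as a simple checklist structure. This is a minimal, illustrative sketch; the `EvaluationPlan` name and its fields are my own assumptions, not part of any standard tool or method.

```python
from dataclasses import dataclass, field

# Hypothetical structure mirroring the planning steps above (illustrative only).
@dataclass
class EvaluationPlan:
    evaluator: str                    # in-house or external usability expert
    scope: str                        # specific features / newer features / entire product
    persona: str                      # target user persona
    tasks: list = field(default_factory=list)  # tasks specific to the persona
    environment: str = "test"         # live / test environment / sandbox

    def is_ready(self) -> bool:
        # The evaluation can start only when every planning step is filled in.
        return all([self.evaluator, self.scope, self.persona, self.tasks, self.environment])

plan = EvaluationPlan(
    evaluator="external UX expert",
    scope="newer features",
    persona="first-time shopper",
    tasks=["find a product", "add to cart", "check out"],
)
print(plan.is_ready())  # True
```

A structure like this makes it easy to spot a skipped planning step before the expert starts evaluating.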
Heuristic Evaluation Report
A typical evaluation report contains usability findings and a detailed explanation of the problems encountered. In my experience, when such reports are floated across technical teams to fix these problems, the teams are left wondering what they should do. For example, a usability problem like "I was unable to navigate to 'Home' from the 'Settings' screen" doesn't really tell the developer how to handle the problem or provide a fix (not until they dig deeper). Hence, it is good to insist that the usability expert provide feature recommendations in addition to usability problems. This means that for each usability problem identified, sufficient screenshots, investigation notes and subsequent recommendations (of what exactly needs to be done to fix the problem) are also recorded. In some cases, usability experts even include best practices from competitor or comparable apps to provide stronger evidence. Effort like this makes the evaluation report directly consumable by technical teams.
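One way to keep findings actionable is to record each one with a mandatory recommendation field. The `Finding` record below is a hypothetical sketch; its field names are assumptions, not a standard report schema.

```python
from dataclasses import dataclass, field

# Illustrative record for one usability finding (field names are assumptions).
@dataclass
class Finding:
    problem: str                      # the usability problem, as observed
    screen: str                       # where it was encountered
    screenshots: list = field(default_factory=list)
    investigation_notes: str = ""
    recommendation: str = ""          # what exactly should be done to fix it
    competitor_examples: list = field(default_factory=list)

    def is_actionable(self) -> bool:
        # A developer can act on a finding only if it carries a concrete recommendation.
        return bool(self.recommendation)

finding = Finding(
    problem="Unable to navigate to 'Home' from the 'Settings' screen",
    screen="Settings",
    recommendation="Add a persistent 'Home' button to the navigation bar on every screen",
)
print(finding.is_actionable())  # True
```

Flagging findings where `is_actionable()` is false before the report is circulated forces the recommendation gap to be closed by the evaluator, not the developer.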
One evaluator is NOT a user
Arnie Lund once said, "Know thy user, and you are not thy user". For the same reason, usability findings from heuristic evaluation often get discounted as 'some user's opinion that doesn't matter.' There is also the risk that the evaluator holds atypical views and is a poor representative of the average user. This leads many to frown upon heuristic evaluation.
In fact, Jakob Nielsen found in his research that a single evaluator is not enough. According to him, it is good to have at least 3 to 5 evaluators, who together tend to find most of the usability problems in the product. This approach also helps in fine-tuning the findings, separating the low-hanging fruit from the truly severe usability problems. The results are then aggregated and evaluated by an evaluation manager, who is different from the main evaluators. The impact of such a heuristic evaluation is much greater than that of one done by a single evaluator.
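The aggregation step can be sketched as a simple tally: each evaluator reports a set of problems, and the evaluation manager ranks problems by how many evaluators flagged them. The problem IDs below are made up for illustration.

```python
from collections import Counter

# Hypothetical findings: each evaluator reports a set of problem IDs.
evaluator_findings = [
    {"nav-home", "contrast-cta", "label-cart"},   # evaluator 1
    {"nav-home", "label-cart"},                   # evaluator 2
    {"nav-home", "contrast-cta", "empty-state"},  # evaluator 3
]

# Count how many evaluators flagged each problem.
counts = Counter(p for findings in evaluator_findings for p in findings)

# Problems flagged by most evaluators are likely severe; problems flagged by
# only one may be that evaluator's opinion and need the manager's review.
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # ('nav-home', 3)
```

Ranking by agreement gives the evaluation manager a quick way to separate consensus problems from single-evaluator opinions.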
A Complementary Approach
On a few e-commerce projects, I applied a complementary approach. While one or two evaluators performed heuristic evaluation on the product, a separate team performed user testing with 15-25 users. At the end of user testing, findings from users were collated into a 'User Experience Report.' A similar report would be created by the evaluators. The results of both reports would be compared by an evaluation manager, who would then identify feature recommendations based on the inputs from both. This approach worked well both for startups operating in a lean model and for large enterprises.
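The comparison the evaluation manager performs can be pictured as simple set operations over the problems in the two reports. The problem IDs below are invented for illustration; this is not how any specific tool models it.

```python
# Hypothetical problem IDs from the two reports (illustrative only).
heuristic_report = {"nav-home", "contrast-cta", "label-cart"}
user_testing_report = {"nav-home", "label-cart", "search-filters"}

confirmed = heuristic_report & user_testing_report    # found by both: strongest candidates
expert_only = heuristic_report - user_testing_report  # expert opinion, needs user validation
users_only = user_testing_report - heuristic_report   # real user pain the evaluators missed

print(sorted(confirmed))  # ['label-cart', 'nav-home']
```

Problems confirmed by both reports make the easiest case to stakeholders, while the two one-sided buckets show where each method alone would have fallen short.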
A Word of Caution
The success of heuristic evaluation, user testing, or other complementary approaches is defined not just by the process, but by the user personas selected, the types of tasks performed, the effectiveness of task execution, the kind of information captured and the way it is presented to stakeholders. This is why heuristic evaluation isn't something that can be done 'on the fly' by anyone. It needs to be performed by experienced practitioners who are aware of their biases and present unbiased findings.