23 July, 2015

Heuristic Evaluation - What's That?


I heard the term 'heuristic' for the first time in 2009, when my teacher Pradeep Soundararajan introduced oracles and heuristics through his workshops and blogs. 
At the time, I didn't understand the depth of the word. As with any concept, I assumed it would make more sense with time and experience. As I explored UX, I was glad to see the UX world adopt this term long before other industries had even learned about it.

Heuristic Evaluation

A heuristic is a guideline. For example, 'Don't drink and drive' is a heuristic. While it is a guideline, it neither means that all people who drink and drive will *always* meet with accidents nor that all people who don't drink will *never* meet with accidents. A heuristic is a simple rule of thumb: fallible, and dependent on the context in which it is put to use. 
Heuristic evaluation is a usability testing method that helps identify usability or user experience problems in a product, usually performed by a usability expert with strong domain knowledge. It is also called a UX expert review or usability audit in some organizations. The earliest usability heuristics were defined by Jakob Nielsen and Rolf Molich in 1990, in their paper 'Improving a human-computer dialogue', and were at the time aimed mostly at desktop software. Since then, product ideas have evolved and technologies have become more capable and more complex, changing the way usability work is done. Heuristic evaluation has followed suit.

How is Heuristic Evaluation done?

A simple heuristic evaluation process includes the steps below:
  1. Identify the usability expert (in-house or external)
  2. Define the scope of the evaluation: specific features only, newer features only, or the entire product
  3. Identify the target user group, the specific user persona and the characteristics associated with that persona
  4. Define tasks specific to the user persona selected above
  5. Define the environment in which the evaluation will be performed (live product / test environment / sandbox)
  6. Have the usability expert perform the evaluation and fill in an evaluation form recording every usability problem encountered at the task level
Once the evaluation is complete, the results must be analyzed with the relevant stakeholders, who can then make appropriate product decisions based on the usability problems identified.

Heuristic Evaluation Report

A typical evaluation report contains usability findings and a detailed explanation of the problems encountered. In my experience, when such reports are floated across technical teams to fix these problems, the teams are left wondering what they should do. For example, a usability problem like "I was unable to navigate to 'Home' from the 'Settings' screen" doesn't really tell the developer how to handle the problem or provide a fix (not until they delve deeper). Hence, it is good to insist that the usability expert provide feature recommendations in addition to usability problems. This means that for each usability problem identified, sufficient screenshots, investigation notes and concrete recommendations (of what exactly needs to be done to fix the problem) are also recorded. In some cases, usability experts even include best practices from competitor or comparable apps to provide stronger evidence. Effort like this makes the evaluation report directly consumable by technical teams.
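As a rough illustration of what a directly consumable finding could look like, here is a minimal sketch in Python; the field names and example values are my own assumptions, not a standard template.

    # Minimal sketch of one entry in a heuristic evaluation report.
    # Field names and example values are illustrative, not a standard template.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UsabilityFinding:
        task: str                    # task the evaluator was performing
        heuristic: str               # guideline that was violated (e.g., one of Nielsen's)
        problem: str                 # what went wrong, described plainly
        severity: str                # e.g., "low", "medium", "high"
        screenshots: List[str] = field(default_factory=list)  # evidence captured during evaluation
        investigation_notes: str = ""                          # where and why the problem occurs
        recommendation: str = ""                               # what exactly should change to fix it
        competitor_examples: List[str] = field(default_factory=list)  # best practices from comparable apps

    finding = UsabilityFinding(
        task="Navigate back to Home from the Settings screen",
        heuristic="User control and freedom",
        problem="There is no visible way to return to Home from Settings",
        severity="high",
        recommendation="Add a persistent Home affordance (or back navigation) on the Settings screen",
    )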

One evaluator is NOT a user

Arnie Lund once said, "Know thy user, and you are not thy user." For the same reason, findings from heuristic evaluation often get discounted as 'some user's opinion that doesn't matter.' There is an additional risk of the evaluator being someone with 'not so normal' thoughts, and perhaps a poor representative of the average user. This leads many to frown upon heuristic evaluation.
In fact, Jakob Nielsen's research found that a single evaluator uncovers only a fraction of the usability problems. According to him, it is good to have at least 3 to 5 evaluators, who together end up finding most of the usability problems in the product. This approach also helps in fine-tuning the findings, to differentiate between low-hanging fruit and severe usability problems. The results are then aggregated and reviewed by an evaluation manager, who is not one of the evaluators. The impact of such a heuristic evaluation is much greater than that of one done by a single evaluator.
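To make the aggregation step concrete, here is a minimal sketch, under the assumption that each evaluator hands in a list of (problem, severity) pairs; the structure and the ranking rule are my own, not a prescribed method.

    # Sketch: aggregate findings from several evaluators so that problems reported
    # by many of them stand out from one-off observations. Structure is illustrative.
    from collections import defaultdict

    def aggregate(findings_per_evaluator):
        # findings_per_evaluator: dict mapping evaluator name -> list of (problem, severity)
        merged = defaultdict(lambda: {"evaluators": set(), "severities": []})
        for evaluator, findings in findings_per_evaluator.items():
            for problem, severity in findings:
                merged[problem]["evaluators"].add(evaluator)
                merged[problem]["severities"].append(severity)
        order = {"low": 0, "medium": 1, "high": 2}
        # Rank by how many evaluators hit the same problem, then by the worst severity reported.
        return sorted(
            merged.items(),
            key=lambda kv: (len(kv[1]["evaluators"]),
                            max(order[s] for s in kv[1]["severities"])),
            reverse=True,
        )

    report = aggregate({
        "evaluator_1": [("No way back to Home from Settings", "high")],
        "evaluator_2": [("No way back to Home from Settings", "high"),
                        ("Low contrast on field labels", "low")],
        "evaluator_3": [("Low contrast on field labels", "medium")],
    })
    for problem, info in report:
        print(problem, "- reported by", len(info["evaluators"]), "evaluator(s)")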

A Complementary Approach

On a few e-commerce projects, I applied a complementary approach. While one or two evaluators performed heuristic evaluation on the product, a separate team performed user testing with 15-25 users. At the end of user testing, findings from users were collated into a 'User Experience Report.' A similar report would be created by the evaluators. The results of both reports would be compared by an evaluation manager, who would then identify feature recommendations based on the inputs in both reports. This approach worked really well for startups operating in a lean model as well as for large enterprises.

A Word of Caution

The success of heuristic evaluation, user testing and other complementary approaches is defined not just by the process, but by the user personas selected, the types of tasks performed, the effectiveness of task execution, the type of information captured and the way in which it is presented to stakeholders. This is why heuristic evaluation isn't something that can be done 'on the fly' by anyone. It needs to be performed by experienced practitioners who are aware of their biases and present unbiased findings.

29 June, 2015

Recruiting Users for User Testing



Mobile User Personas
I have conducted user testing sessions for several clients, in different capacities, while I was in the services space. When I say 'different capacities', I mean that some of those sessions used in-house users and some used external users. Some testing projects were on a small scale where fewer than 10 users were involved, while others had several scores of users. Irrespective of the scale, a few questions often popped into my head: "Who are the 'RIGHT' kind of users?", "How many users are good enough?" and so forth. At times, I wondered whether the users I hired were a representative sample of the real user base spread across the globe. Recruiting users is the most difficult and critical part of user testing. Here is how I approached this challenge:

App Context

Suppose you are recruiting users for testing a yet-to-be-released mobile yoga app that caters to Ashtanga Yoga aspirants. There are several formats of yoga in the market, especially in the western world, and many Ashtanga Yoga practitioners believe that theirs is the most authentic form of yoga. Which users from this large community should we consider for user testing of this particular yoga app? Who do we recruit? How do we recruit? On what basis?

Finding the 'RIGHT' kind of users? 

Identifying the right kind of users is a challenging task. Many organizations follow the 'hallway testing' approach, where users are chosen at random as though they were just walking down the hallway. These users may not be the best possible sample given diversity factors like geography, culture, age group, profession, tech-savviness and so forth. It is always good to know who the users are and what their key characteristics are. Without this information, we might just react like horses with blinkers on.

How to recruit users

In the context mentioned above, the consumers of this app are yoga practitioners, teachers, students and the general public. These people may or may not be the users we are looking for. A few of them may not even know how to use a mobile app. Some might be extremely tech-savvy and represent a fairly good sample. Recruiting users comes down to asking the right questions for the context of the product. The user testing team can design a 'User Recruitment Questionnaire' that helps screen users and shortlist the most suitable candidates.

User Recruitment Questionnaire

A User Recruitment Questionnaire, also known as a screener template, in its simplest form has three categories:
1. General Questions
This section asks general questions about user demographics, such as:
  • Gender
  • Age Group
  • Occupation / Business Sector
  • Nationality
  • Income Group
2. Product Context-Specific Questions
This section includes questions specific to yoga, since the product under test deals with yoga training:
  • Do you teach Yoga?
  • How long have you been teaching yoga?
  • What specializations do you have in Yoga?
  • How often do you teach yoga in a week?
Note: The questions above address only practitioners and teachers at this point. You can include more questions specifically targeted at recruiting yoga students as well.
3. Tech-savvy-ness
  • Are you a smartphone user?
  • How often do you access the internet on your smartphone?
  • Do you have technical knowledge of using mobile devices?
  • What is your smartphone model (device name, manufacturer and model)?
  • Have you used any yoga apps in the past?
This recruitment questionnaire can be distributed to potential users via email, Google Forms or an online survey tool. Once user responses are available, we can choose which kind of users we want from this list based on the product context and the user demography we are targeting. 
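As an illustration, here is a minimal sketch of how responses exported from such a form could be screened programmatically; the column names, cut-off rules and file name are hypothetical and would need to match your own questionnaire.

    # Sketch: shortlist respondents from exported screener answers.
    # Column names, rules and the file name are hypothetical placeholders.
    import csv

    def shortlist(csv_path, max_participants=10):
        selected = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                teaches_yoga = row.get("Do you teach Yoga?", "").strip().lower() == "yes"
                owns_smartphone = row.get("Are you a smartphone user?", "").strip().lower() == "yes"
                # Keep smartphone-owning yoga teachers; adjust the rules per target persona.
                if teaches_yoga and owns_smartphone:
                    selected.append(row)
                if len(selected) == max_participants:
                    break
        return selected

    # Example usage (hypothetical file): participants = shortlist("screener_responses.csv")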

How many users are good enough

Naive user testing teams start with 1-2 users. Others say 5-10 users are adequate. I have had good output with 30 users on a few projects. The question really is, 'How many users are good enough?' Jakob Nielsen, a User Advocate and Principal of the Nielsen Norman Group, has done extensive research on user testing and considers 5 users a good enough number to start with. According to Nielsen, 5 users uncover most of the usability and user experience problems that a much larger number of participants would find. 
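Nielsen and Landauer modelled this with the formula problems_found(n) = N(1 - (1 - L)^n), where N is the total number of usability problems in the product and L is the proportion a single test user uncovers (roughly 31% in their studies). A quick sketch of the curve this implies:

    # Nielsen & Landauer's model: share of problems found with n test users,
    # assuming a single user uncovers about L = 31% of them.
    def fraction_found(n_users, L=0.31):
        return 1 - (1 - L) ** n_users

    for n in (1, 3, 5, 10, 15):
        print(f"{n:>2} users -> ~{fraction_found(n):.0%} of usability problems found")
    # With L = 0.31, five users already surface roughly 84-85% of the problems,
    # which is why additional users beyond that point yield diminishing returns.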
Regardless of whether user recruitment is done through online communities, friends and family, beta or private beta programmes, this approach can be beneficial. Things might not work as expected the first time around. It might take a couple of iterations to implement this approach, make mistakes and then fix them before you start to see positive results. Nevertheless, it's better to try and fail than to do nothing at all.
What approach does your team take to recruit users? How well has it worked for you?

11 June, 2015

Here's what you did wrong - Recoverability Testing and UX Connection

A few weeks ago, I was on the ground floor of my office when the elevator arrived. I pressed '4' and continued chatting with my colleague. We reached the 4th floor and noticed the lift didn't stop there. I was under some imaginary pressure to prove to my colleague that I had indeed pressed '4', as she stared at me. While I was explaining what had just happened, she said, "The elevator behavior is right. You are wrong." Apparently, if one presses '4' and the elevator goes to the basement or other lower levels and returns to the ground floor, the floor selections are reset. Was it human error or system error? 
Most failures are evil because they tell us we did the wrong thing. They tell us that it's something WE did that resulted in failure. They tell us that WE screwed up. According to Don Norman, over 90% of industrial accidents are blamed on human error. If it were 5%, we might believe it. But when it is virtually always, shouldn't we realize that it is something else? That the systems were wrong? Perhaps.
Sidney Dekker believes that "human error is not a cause of accidents, but a symptom of trouble deeper inside a system". We are humans. We cannot be accurate and precise all the time. We are preoccupied, we are in different states of mind at different times, and we each have our own way of living our lives and dealing with challenges. As a result, we make mistakes. Is that really a failure on our part, or was the system not designed intuitively enough to prevent human mistakes or, at least, guide the user when mistakes happen?
Murphy's law states that "Anything that can go wrong, will go wrong". While things can go wrong, helping users recover from such situations can go a long way in building credibility and loyalty with the user. 
Recoverability of errors is a key element to be considered while designing products. When errors occur, the following five elements can repair, to some degree, the damage the error might have caused.
1. Provide visibility to the user of what was done
An error occurs when the user does something that the system doesn't know how to handle. When users make mistakes and get no feedback, they're completely lost. For example, sending an email that's eaten up by a virus, where the recipient doesn't know a thing about it. When an error occurs, it's good to tell the user exactly what they did a few moments ago. This helps the user realize whether their actions were right or not and make amends accordingly.
2. Show or tell the user what went wrong
Once the error has occurred, the user needs to know what really went wrong in the first place. The message displayed should state clearly what actually went wrong. This is in addition to the visibility described above, where the user is told what they did versus what went wrong after that.
3. Indicate how the user can reverse the unwanted outcome
Users are least interested in geeky or clever error messages. They just want to get out of the error situation as soon as possible. Error codes like 'Type 2 error number 10000345 occurred' are neither informative nor useful. The error message should tell the user how to reverse the error, or what the next best thing to do is in order to recover from it. In short, the user needs to know how to get back to the state of the application where they left off before the error occurred. Additionally, it is good to give the user useful advice for fixing the problem. For example, on an e-commerce app, simply saying a book is out of stock is far worse than offering a 'Notify me' feature that alerts you when the book is back in stock (a small sketch of this idea follows these five elements).
4. If reversibility is not possible, indicate this to the user
In some cases, reversing an error is not possible. In such cases, it is best to tell the user to force-close the application and start from scratch or from a specific location in the app. Take the example of password fields. When a user enters an overly simple password, an error message pops up with a big list of rules for a strong password. Instead, if the user is shown these rules upfront, as a label below the password field, the hassle can be avoided. 
5. Preserve User Data
The app must be able to preserve the user's data at all times and never corrupt or leak confidential information. Period!
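Here is a minimal sketch of the 'Notify me' idea from element 3: an internal failure is translated into what the user did, what went wrong and a concrete recovery action, instead of a raw error code. The type and field names are illustrative assumptions, not a prescribed API.

    # Sketch: turn an internal error into an actionable, recoverable message.
    # The type and field names are illustrative, not a real framework API.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class UserFacingError:
        what_you_did: str       # element 1: visibility of the user's action
        what_went_wrong: str    # element 2: plain-language explanation
        how_to_recover: str     # element 3: the next best thing to do
        recovery_action: Optional[Callable[[], None]] = None  # e.g., wired to a 'Notify me' button

    def out_of_stock_error(book_title: str) -> UserFacingError:
        return UserFacingError(
            what_you_did=f"You tried to add '{book_title}' to your cart.",
            what_went_wrong="This book is currently out of stock.",
            how_to_recover="Tap 'Notify me' and we'll let you know when it is back in stock.",
            recovery_action=lambda: print(f"Subscribed to restock alerts for {book_title}"),
        )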
An error that can be made will be made. For example, if you misspell 'Murphy's Law' as 'Murph's Law' in Google, it still displays results, but also shows 'Showing results for Murphy's Law'. It turns an error into a good feeling. Transforming an error situation into actually helping the user is an intelligent way to deal with an error. Here's a message from Don Norman about error messages: "Error messages punish people for not behaving like machines. It is time we let people behave like people. When a problem arises, we should call it machine error, not human error: the machine was designed wrong, demanding that we conform to its peculiar requirements. It is time to design and build machines that conform to our requirements. Stop confronting us: Collaborate with us."
Graceful recoverability from errors opens a new avenue for organizations to create great user experiences.
What do you think?

01 June, 2015

How to Test User Experience

This article was originally published on TechWell.
User experience (UX) involves the range of emotions a user feels while using a product or service. The product or service may have amazing features and capabilities, but if it fails to delight the user, the person will hardly use it. United Airlines is setting an aspirational target for its customers' UX. It's striving to create an in-flight experience that is “legroom friendly,” “online friendly,” and “shut-eye friendly.”
Understanding how users feel involves becoming aware of man-machine interactions. This knowledge can then be used to improve the overall user experience. Sadly, many of those who talk about UX as though it’s a set of tools and approaches often forget about the human side of products. A range of tests can be performed while a user is engaging with a piece of software to ensure that the user is never forgotten at any point in the development process.
Emotional Response Test
Users don’t hold a script in one hand while using a product or service with the other. By probing users and recording their emotions, ranging from amusement to annoyance, UX teams and testers can gather invaluable information about what makes a product great—and what makes it a nuisance.
User experience professional Robert Hoekman Jr. has a list of tenets on the value of user experience strategy. One of the tenets is "A user’s experience belongs to the user. An experience cannot be designed. It can, however, be influenced. A designer’s job is to be the influencer."
First Impressions Test
What can you tell about people or websites in a short time? A lot. Research on first impressions shows that student evaluations given after watching only a few seconds of video of a professor are remarkably similar to evaluations by students who actually had that professor for an entire semester. In the same spirit, five-second tests, visual appeal tests, navigation tests, and click tests give inputs about users’ early impressions of products and websites, which can be used to understand what makes a delightful user experience.
User Pain Points
What truly delights users is often implicit. Bill Gates was absolutely correct when he remarked that unhappy customers are a great source of learning about UX. You can gather user pain points from complaints and warnings, by talking to users more often, and by observing them using websites and software and recording their emotions. These days, it’s easy to get customer feedback at the drop of a hat through social media.
As Steve Jobs said, design is "not just what it looks like and feels like. Design is how it works.” Testers can use a variety of heuristics to tell the UX team what does and doesn’t work for users so that the entire project team knows exactly what gives their customers the greatest experience possible.
How do you test User Experience?

08 February, 2015

Competitor Analysis: A Simple How-To Guide To Get Started


Competitor analysis is an assessment of the strengths and weaknesses of current and potential competitors of the product at hand. This analysis highlights not just the positive and negative aspects of the product, but also potential opportunities and threats. Some organizations loosely call it a product ‘SWOT’ analysis.

Competitor analysis can be done at multiple levels. One can simply pick a superset of all features available across comparable products and map each product’s capability against each feature. Comparison can also be done at a technology level, for example comparing Teradata with other data warehouse products, or across specific market segments. Common dimensions include:
  1. Product features
  2. Technology
  3. Market segment
  4. Geographical areas
  5. Others
Competitor Profiling
Answering the question, “Why do you want to perform competitor analysis?” is a critical first step. The answer helps identify several key indicators on which the exercise can be based. A few indicators that are instrumental in performing competitor analysis are listed below:
  1. Industry / domain
  2. Potential competitors / competitor products
  3. Users of competitor apps (to understand why these users like the competitor more)
  4. Competitor’s market share and why they may be ahead
  5. Competitor’s strategy (sales, marketing, branding, promotions, advertising)
  6. Social media presence, for mass-market apps
  7. Areas where they excel and the product lags far behind
  8. Cost / distribution factors
An organization gains a competitive advantage only when it outperforms its competitors in a way that matters to the customer. It is hence important to ensure that the product has key differentiators that are clearly a hit with customers. This doesn’t mean that one has to cram every feature into a single product and offer a gigantic mixture of all solutions in one place. Dell, for example, is known for mass customization depending on stakeholders’ needs.

Another area is the pricing of the product itself. Customers are constantly wondering whether they can find a great quality product that solves a problem or an unmet need at a very low price, and they are always on the lookout for organizations with cheaper pricing. This is where extensive research is needed to arrive at a suitable price that caters to different market segments, customer personas and economies.

The last aspect of gaining a competitive advantage relates to the stickiness of the product. “What’s sticky about your app?” can make or break any product. This is different from possessing differentiating features. For example, what’s so addictive about Facebook that people would abandon filling in timesheets and chat with friends online on the organization’s time? Products need to be capable of hooking customers through their inherent purpose.

If someone is smarter than you, make him your friend. If he can’t be your friend, buy him out and kill him. The series of acquisitions and mergers in similar market segments is testimony to the fact that many organizations aim to create a monopoly in the market, either by buying competitors out or by owning their technology to move forward faster.

Different Approaches to Competitor Analysis
There are several approaches to performing competitor analysis. I have listed a few that I have personally explored and found successful in communicating apt feedback to stakeholders:

  1. Star ratings based
     Features/scenarios can be rated by giving star ratings, a single star meaning poor
     and five stars meaning outstanding.

  2. Points based
     Features/scenarios can be rated using a points system of 1 to 5 per feature or scenario.

  3. Subjective feedback
     Some stakeholders prefer detailed subjective feedback, as it is easier to understand
     the underlying analysis through prose than through symbols/numbers, i.e., stars and points.

Implementation Example
Consider a simple example of performing competitor analysis of the Flipkart.com website against Amazon, using a mix of approaches (2) and (3) listed above. One starts by identifying the purpose of the exercise. For the benefit of this article, let’s say the purpose is to "Identify features where flipkart.com lags behind amazon.in".

To accomplish this, make a list of all features available on both websites. Analyze every feature in detail on both websites and assign a rating to it. You can also provide subjective feedback explaining which feature is better and why you think so.
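As a rough, self-contained illustration of the points-based part of this exercise, here is a small sketch; the features and scores below are made-up placeholders, not real measurements of either website.

    # Sketch of the points-based approach: score each feature for both sites on a
    # 1-5 scale and derive an overall rating. All values are made-up placeholders.
    feature_scores = {
        # feature: (flipkart.com, amazon.in)
        "Search and filtering": (3, 4),
        "Product reviews":      (3, 5),
        "Checkout flow":        (4, 4),
        "Order tracking":       (4, 3),
    }

    def overall(scores, index):
        return sum(pair[index] for pair in scores.values()) / len(scores)

    for feature, (flipkart, amazon) in feature_scores.items():
        verdict = "lags behind" if flipkart < amazon else "matches or leads"
        print(f"{feature}: flipkart {flipkart}/5 vs amazon {amazon}/5 -> flipkart {verdict}")

    print(f"Overall: flipkart {overall(feature_scores, 0):.1f}/5, amazon {overall(feature_scores, 1):.1f}/5")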

At the end of the exercise, this is what a snippet of the competitor analysis document looks like.



In some cases, you can also make a list of different tasks and measure their efficiency using metrics like ‘success in getting the task done’, ‘time taken to complete the task’, ‘customer satisfaction level’ and so forth.

Customer touchpoint testing is another metric to capture for both the main product and the competitor product. This helps analyze how each organization handles user complaints, escalations and queries across media like calls, email, chat, service requests, feedback and recommendation tools, and so forth.

Once this activity is done for all features, an overall rating can be arrived at as shown below.


Additionally, we can provide a summary report of our findings and elaborate on Product Stickiness.

Competitor analysis should be initiated with a well-defined objective. Once it is complete, the respective stakeholders must work towards fixing the gaps identified and contribute towards building a better product. There are different approaches to performing competitor analysis, including methods as simple as visiting your competitor as a potential client and getting insider news :-). There are professional organizations that do a great job of performing such analysis; it comes at a high cost, yet can be valuable enough. Which method one chooses is less important than what is done with the results in the end. In my experience, a few organizations invested a lot of time in competitor analysis, only to pack up the results and bury them in the safest product file. If you want to perform this analysis, be sure and be serious about it.

To summarize, anyone can learn to do competitor analysis using the simple steps mentioned above and evolve it over a period of time. The results can be shared with the appropriate stakeholders, highlighting the pros and cons of the product at hand, so that suitable improvements can be made in subsequent releases.

Reference
http://en.wikipedia.org/wiki/Competitor_analysis