01 September, 2015

Wireframes Testing - Part II [How did I do it]

The first entry in this two-part article talked about the fundamentals of wireframes testing and its advantages and disadvantages. In this entry, I will touch upon my journey with the expert review method of testing wireframes.
Expert Review
Expert review involves a subject matter expert reviewing the wireframes and providing feedback drawn from his/her past experience.
High Fidelity Wireframe
A high-fidelity wireframe is one that comes quite close to the final product: it has a high level of detail and gives a good indication of the proposed product's aesthetics and functionality.
Consider the high-fidelity wireframe in the picture above. This wireframe includes placeholders for tabs, hyperlinks, images, a search box, breadcrumbs and more. In short, it contains the layout, the navigation and hints on how the product might work or behave.
Low Fidelity Wireframe
A quick and easy translation of high-level design concepts into tangible wireframes constitutes a low-fidelity wireframe. In the above picture, placeholders are defined at a very high level, providing information only on the layout and structure of how the product might appear.
Problem Context
A year ago, I happened to test high-fidelity wireframes for a web-based product used by call center folks to sort incoming service requests and process them into appropriate queues. The product sounds simple, until you hear that the analyst has to pick each service request, convert it into a format that another automated system can understand and push it into different queues depending on the type of service request. Each analyst has to process at least 100 requests per day, and the current system was too unfriendly to let analysts work productively without making mistakes. Hence, the need for re-design!
In their quest to help analysts use the product better, the development team wanted feedback on the wireframes they had developed, which they hoped would fix some, if not all, of the challenges the analysts were facing.
Wireframes Testing using Expert Review Method
High-fidelity wireframes have the information architecture and content strategy fairly sorted out, even though wireframes are early work products. This means that, in addition to layout and navigation, the placement of information and presentation of content are described to a degree good enough to make a call on whether the product conveys what it is built for.
A low-fidelity wireframe, like the one above, has a few placeholders within the layout. One can provide feedback only on the structure of the layout, the positioning of placeholder elements and possibly the titles used. Beyond that, it is a pure skeleton of how the product looks and offers little scope for review.
A high-fidelity wireframe, on the other hand, provides scope to review not just the layout, but a major part of the product itself.
User Interface Elements
Validating the need for UI elements is extremely useful while reviewing wireframes. This is when one can choose sliders or pickers over dropdowns or accordion views based on the context of usage, or make other appropriate choices. UI elements to review include:
  • Landing Screen (Look & Feel)
  • Header / Footer information
  • Title (Browser Title and Page/Screen Title)
  • Labels and other Tech Jargon
  • Logo / Buttons / Icons
  • Images
  • Text Fields
  • Settings (Login / Sign In / Logout)
  • Date / Copyright Format
  • Placement of Scrollbars, Dropdown menus
  • Accordion Views
  • Buttons / Links as visual cues
  • Workflows
  • Presentation
  • Alignment
  • Size and Style
  • Existing functionality – features that depict existing functionality
  • Missing functionality – features that may be missing, although highly relevant to the context of the product
  • Others
The above information may or may not be available in full in the wireframes. Depending on the type of wireframes provided, it is good to review them and provide appropriate feedback. Feedback can be provided at two levels, as illustrated in the sketch after this list:
  • At an element level (for example, a particular dropdown may be positioned wrongly on the landing screen)
  • At a product level (for example, navigation could be streamlined better with respect to the 'Translation' feature)
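To make such feedback actionable, it helps to record each note in a consistent structure. Below is a minimal sketch in Python; the class and field names are my own illustration, not part of any wireframing tool.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WireframeFeedback:
    """One piece of review feedback, at element or product level."""
    level: str                 # "element" or "product"
    screen: str                # which wireframe/screen the note applies to
    element: Optional[str]     # the UI element, if element-level feedback
    observation: str           # what was noticed during the review
    recommendation: str        # suggested change for the design team

feedback = [
    WireframeFeedback("element", "Landing", "Region dropdown",
                      "Dropdown sits below the fold",
                      "Move it next to the search box"),
    WireframeFeedback("product", "All screens", None,
                      "Navigation to 'Translation' takes three clicks",
                      "Expose it from the main toolbar"),
]

Keeping element-level and product-level notes in one list makes it easy to group feedback by screen, or to hand the whole thing to the design team at once.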
Challenges I faced
One of the biggest challenges with reviewing wireframes was the fact that transitions, interactions and other subtleties are not visible and need a further level of probing/questioning to get clarity. Although we might not worry much about these at the wireframing stage, it always helps to outline the interactions and behavior of the product early on. At each screen level, I would list all the interactions a feature might have with other features and design how those interactions could happen (a small sketch of such a list follows). This way, many flows get sorted out upfront.
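Here is roughly what such a per-screen interaction list could look like, as a small Python structure. The feature names are hypothetical, loosely modeled on the service-request product described above.

# Hypothetical interaction list for one screen; feature names are illustrative.
interactions = {
    "Request Queue": {
        "Search Box":  "filters the visible requests as the analyst types",
        "Status Tabs": "switching tabs reloads the queue for that status",
        "Detail Pane": "selecting a request opens it for conversion",
    },
    "Detail Pane": {
        "Convert Button": "pushes the converted request into a target queue",
        "Request Queue":  "closing the pane returns focus to the queue",
    },
}

# Walking the map surfaces flows that a static wireframe cannot show.
for feature, links in interactions.items():
    for other, behavior in links.items():
        print(f"{feature} -> {other}: {behavior}")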
Wireframing, whether on paper or using software, is much cheaper and faster than building a working prototype, and offers benefits like portability and accessibility. Today, go-to-market cycles are shrinking, leaving less time to design products; spending several person-years programming an interactive prototype is a costly affair. A key factor in creating an ecosystem that believes in wireframes is creating great wireframes that communicate well, and iterating on them based on inputs from different kinds of users.
Wireframes have personally helped me gain design insight and find usability problems early, which means we save *some* time, *some* effort and *some* money that might otherwise have been spent on building *big-failure* products.
How have wireframes helped you?

10 August, 2015

Wireframes Testing - Part I

A wireframe is a rough skeletal guide for the layout of a website or an app, done with pen/paper or using wireframing software. A wireframe is sometimes also known as a screen blueprint.
Wireframes are usually created to understand the layout, interface, navigation and functionality within the product and how they are stitched together. They are low-fidelity work products; this means they lack graphics, color and other elements of visual design, for the most part.

Why Should You Test Wireframes?

Frank Lloyd Wright once said, “You can use an eraser on the drafting table or a sledgehammer on the construction site.” Wireframing is one of the most valuable tools for usability testing early in the cycle. A lot of problems can be fished out early on, even when users can only play around with skeletal wireframes. That early information, in turn, helps teams fix gaps sooner, allowing them to build better products.

How To Test Wireframes?

Testing wireframes is an age-old concept, yet it is employed by very few organizations. Before the explosion of startups, people often chucked everything in the SDLC to create a working product that could bring in money quickly. However, with concepts like Kanban, Lean and other methodologies, having a minimum viable product with good UX design became an obvious ask. This is where some teams started getting their wireframes tested. There are three ways of testing wireframes:
1. User Testing
In this method, users are invited to test interactive/non-interactive wireframes, and their feedback is captured either by asking them a series of questions about the wireframes or by designing a few tasks they can execute and provide feedback on.
2. Remote User Testing
This method is similar to user testing, except that it is not done face to face with users, but over video conferencing and other online collaboration tools.
3. Expert Review
This method employs a subject matter expert to review/test wireframes and provide detailed analysis.

Wireframing Tools

Wireframes can be hand drawn sketches on paper/whiteboard or they can be produced by using wireframing software – some of which are free.
Axure is a desktop app that runs on Windows and Mac. It has powerful features not just to create low-fidelity wireframes, but also highly interactive mockups.
Balsamiq is a traditional wireframing tool with a focus on “rough sketches” that are close to hand drawn drawings.
Pencil, from The Pencil Project, is free and easy to learn for creating simple sketches.
There are many other tools in the market; one has to pick a tool that suits their context. In recent times, Axure has become a go-to tool for creating 'walking wireframes' that are dynamic by design and mock real functionality, interactions and navigation, all in a single place. Axure goes a step further by creating high-fidelity wireframes/mockups for the target platform – be it web or mobile.

Advantages / Disadvantages of Wireframing

Advantages:
  • Communicate innovative design ideas quickly
  • Facilitate an early feedback mechanism for clients
  • Provide an opportunity to fix critical bugs/problems early in the SDLC
  • Easy to change, compared to making changes on a live product
Disadvantages:
  • Wireframes are limited by the power of the tool used
  • Interactions within wireframes may not be self-explanatory at all times
  • Poor collaboration with the corresponding scrum teams during the wireframing stage can destroy the benefits of creating wireframes in the first place

Design First, Then Prototype Approach

In the olden days, wireframes or other prototypes were conceptualized first; design ideas came into play later, deciding what design elements needed to be added to the product. Some tools had limitations that restricted the implementation of unique design ideas. Today, better designs are more engaging and play a key role in areas like preventing user abandonment. Having said that, it is important to flip the order: design first, and only later figure out how to create wireframes or prototypes that match the design.
The next entry in this two-part article will touch upon the expert review method of testing wireframes and how it benefits teams.

23 July, 2015

heuristic evaluation - What's That?

A heuristic is a fallible means of solving a problem. A heuristic evaluation is a method where a usability evaluator helps to identify usability or user experience problems in a product against an existing list of widely accepted usability heuristics. heuristic evaluation is also called an expert review in some organizations.
Many have developed heuristics in other spheres, like testing and development. The earliest usability heuristics were defined by Jakob Nielsen and Rolf Molich in 1990, in their paper 'Improving a human-computer dialogue', and were at the time mostly targeted towards desktop products. Over time, product ideas have evolved and technologies have become better and more complex, changing the way usability work is done. heuristic evaluation has followed suit.

How is heuristic evaluation done?

A simple heuristic evaluation process includes the steps below:
  1. Identify the usability expert (in-house or external) who will review the product against a defined set of usability heuristics
  2. Define the scope of the evaluation: is it for specific features, only for newer features, or for the entire product?
  3. Define the environment in which the evaluation will be performed (live product / test environment / sandbox)
  4. Have the usability expert perform the evaluation and fill in an evaluation form highlighting every usability problem encountered at a task level
Once the evaluation is complete, the results must be analyzed with the corresponding stakeholders, who can then make appropriate decisions about the product based on the usability problems identified.

Usability Report

A typical evaluation report contains usability findings and a detailed explanation of the problems encountered. In my experience, when such reports are floated across technical teams to fix these problems, the teams are left wondering what they should do. For example, a usability problem like "I was unable to navigate to 'Home' from the 'Settings' screen" doesn't really tell the developer how to handle the problem or provide a fix (not until he delves deeper into it). Hence, it is good to insist that the usability expert provide feature recommendations in addition to usability problems. This means that for each usability problem identified, sufficient screenshots, investigation notes and subsequent recommendations (of what exactly needs to be done to fix the usability problem) are also recorded, roughly along the lines of the sketch below. In some cases, usability experts even include best practices of competitor apps to support their findings.
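As an illustration, here is how one such finding could be recorded. This is a minimal Python sketch with field names of my own choosing, not a standard report format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UsabilityFinding:
    """One entry in a usability report."""
    task: str                      # the task during which the problem surfaced
    problem: str                   # what the evaluator observed
    heuristic: str                 # the heuristic the problem violates
    severity: int                  # e.g. 1 (cosmetic) to 4 (catastrophic)
    screenshots: List[str] = field(default_factory=list)  # evidence
    investigation_notes: str = ""  # how the evaluator dug into the problem
    recommendation: str = ""       # what exactly should be done to fix it

finding = UsabilityFinding(
    task="Return to Home from Settings",
    problem="Unable to navigate to 'Home' from the 'Settings' screen",
    heuristic="User control and freedom",
    severity=3,
    screenshots=["settings_no_home.png"],
    investigation_notes="The back gesture exits the app instead of going Home",
    recommendation="Add a persistent Home button to the Settings header",
)

A developer reading such a record knows the problem, the evidence and the proposed fix without having to reconstruct the evaluator's session.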

One evaluator is NOT a user

Arnie Lund once said, "Know thy user, and you are not thy user". For the same reason, usability findings from heuristic evaluation often get discounted as 'some user's opinion that doesn't matter.' There is an additional risk that the evaluator is someone with 'not so normal' thoughts, and hence a poor representative of the average user. This leads many to frown upon heuristic evaluation.
In fact, Jakob Nielsen, in his research, found that one evaluator is no good. According to him, it is good to have at least 3 to 5 evaluators, who between them will find most of the usability problems in the product. This approach also helps in fine-tuning the findings, to actually differentiate the low-hanging fruit from the really bad usability problems. The results are then aggregated and evaluated by an evaluation manager, who is different from the main evaluators. The impact of such a heuristic evaluation is much better than that of one done by a single evaluator.

A Complementary Approach

On a few e-commerce projects, I applied a complementary approach. While one or two evaluators provided feedback on the product, a separate team performed user testing with 15-25 users. At the end of user testing, findings from users were collated into a 'Usability Report.' The results of the two reports would then be compared by an evaluation manager, who would identify feature recommendations based on the inputs in both. This approach worked really well for startups.

Expert Reviews

The success of heuristic evaluation and other complementary approaches is defined not just by the process, but by the strength of the heuristics involved, the type of information captured and the way in which it is presented to stakeholders. This is why heuristic evaluation isn't something that can be done 'on the fly' by anyone. It needs to be performed by experienced practitioners who are aware of their biases and present unbiased findings. Such evaluations performed by usability experts are called Expert Reviews.
In short, heuristic evaluation is done by evaluators who refer to specific heuristics and evaluate the product against them. Every usability problem found using this method is mapped to an existing heuristic against which the evaluation was done. Expert reviews, on the other hand, are performed by subject matter experts in an informal atmosphere, using a list of heuristics that may not be well-defined at all.
Update (4th Aug 2015)
My teacher, James Bach, was kind enough to point out how Heuristic Evaluation is different from heuristic evaluation and to flag a few gaps in my understanding of heuristic evaluation. I have updated this post based on my improved understanding.

29 June, 2015

Recruiting Users for User Testing

[Image: Mobile User Personas]
I have conducted user testing sessions for several clients in different capacities while I was in the services space. When I say 'different capacities', I mean that some of those users were in-house and some were external. Some testing projects were on a small scale, with fewer than 10 users involved, while others had several scores of users. Irrespective of the scale, a few questions often popped into my head: "Who are the 'RIGHT' kind of users?", "How many users are good enough?" and so forth. At times, I wondered if the users I hired represented the best sample of the real user base spread across the globe. Recruiting users is the most difficult and critical part of user testing. Here is how I approached this challenge:

App Context

Suppose you are recruiting users to test a 'yet to be released' mobile yoga app that caters to Ashtanga Yoga aspirants. There are several formats of yoga in the market, especially in the western world, and many Ashtanga Yoga practitioners believe that theirs is the most authentic form of yoga. Which users from this large community should we consider for user testing of this particular yoga app? Who do we recruit? How do we recruit? On what basis?

Finding the 'RIGHT' kind of users? 

Identifying the right kind of users is a challenging task. Many organizations follow the 'hallway testing' approach, where users are chosen randomly, as though they were walking down the hallway. These users may not be the best possible sample given diversity factors like geography, culture, age group, profession, tech-savviness and so forth. It is always good to know who the users are and what their key characteristics are. Without this information, we might just react like horses with blinkers on.

How to recruit users

In the above-mentioned context, the consumers of this app are yoga practitioners, teachers, students and the general public. These people may or may not be the users we are looking for. A few of them may not even know how to use a mobile app; some might be extremely tech-savvy and represent a fairly good sample. Recruiting users comes down to asking the right questions for the context of the product. The user testing team can design a 'User Recruitment Questionnaire' that helps screen users and shortlist the most suitable candidates.

User Recruitment Questionnaire

A User Recruitment Questionnaire, also known as a screener template, in its simplest form has three categories:
1. General Questions
This section asks general questions related to user demography, such as:
  • Gender
  • Age Group
  • Occupation / Business Sector
  • Nationality
  • Income Group
2. Product Context-Specific Questions
This section includes questions specific to yoga, since the product under test deals with yoga training:
  • Do you teach Yoga?
  • How long have you been teaching yoga?
  • What specializations do you have in Yoga?
  • How often do you teach yoga in a week?
Note: The above questions address only practitioners and teachers at this point. You can include more questions specifically targeted at recruiting yoga students as well.
3. Tech-savvy-ness
  • Are you a smartphone user?
  • How often do you access the internet on your smartphone?
  • Do you have technical knowledge of using mobile devices?
  • What is your smartphone model (device name, manufacturer and model)?
  • Have you used any yoga apps in the past?
This recruitment questionnaire can be distributed to potential users via e-mail, Google Forms or an online survey. Once user responses are available, we can choose the kind of users we want from this list based on the product context and the user demography we are targeting; a minimal screening sketch follows.
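Screening the responses can even be automated once they are collected. The criteria and field names below are hypothetical; a real screener would encode whatever mix of demography, context and tech-savviness the product demands.

# A minimal screening sketch; responses are assumed to be collected as dicts.
responses = [
    {"name": "A", "teaches_yoga": True,  "smartphone_user": True,  "years_teaching": 6},
    {"name": "B", "teaches_yoga": True,  "smartphone_user": False, "years_teaching": 2},
    {"name": "C", "teaches_yoga": False, "smartphone_user": True,  "years_teaching": 0},
]

def shortlist(resp):
    """Keep experienced teachers who can actually operate a mobile app."""
    return (resp["teaches_yoga"]
            and resp["smartphone_user"]
            and resp["years_teaching"] >= 3)

recruits = [r for r in responses if shortlist(r)]
print([r["name"] for r in recruits])  # -> ['A']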

How many users are good enough

Naive user testing teams start with 1-2 users. A few others say 5-10 users are adequate. I have had good output with 30 users on a few projects. The question really is, 'How many users are good enough?' Jakob Nielsen, a User Advocate and Principal of the Nielsen Norman Group, has done extensive research in user testing and thinks that 5 users is a good enough number to start with. As per Nielsen, 5 users can find almost as many usability/user experience problems as a much larger number of participants; the math behind this claim is sketched below.
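Nielsen's claim rests on a formula he published with Tom Landauer: the number of problems found with n users is N(1 - (1 - L)^n), where N is the total number of problems in the product and L is the proportion a single user uncovers (around 31% in their studies). A quick sketch of the curve:

# Nielsen & Landauer: problems_found(n) = N * (1 - (1 - L)**n), with L ~= 0.31
N, L = 100, 0.31   # assume 100 total problems, for illustration

for n in (1, 3, 5, 10, 15):
    found = N * (1 - (1 - L) ** n)
    print(f"{n:2d} users -> ~{found:.0f} problems found")

With these numbers, 5 users already uncover roughly 84 of the 100 problems, while tripling the participants yields only marginal gains; that diminishing return is why Nielsen suggests starting with 5.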
Regardless of whether user recruitment is done through online communities, friends and family, beta or private beta, this approach can be beneficial. Things might not work as expected the first time around. It might take a couple of iterations to implement this approach, make mistakes and then fix them before you start to see positive results. Nevertheless, it's better to try and fail than to do nothing at all.
What approach does your team take to recruit users? How well has it worked for you?

11 June, 2015

Here's what you did wrong - Recoverability Testing and UX Connection

A few weeks ago, I was on the ground floor of my office when the elevator arrived. I pressed '4' and continued chatting with my colleague. When we reached the 4th floor, the lift didn't stop. I was under some imaginary pressure to prove to my colleague that I had indeed pressed '4', as she stared at me. While I was explaining what had just happened, she said, "The elevator behavior is right. You are wrong." Apparently, if one presses '4' and the elevator goes to the basement or other lower levels and returns to the ground floor, the switches are reset. Was it human error or system error?
Most failures are evil because they tell us we did the wrong thing. They tell us that it's something WE did that resulted in failure. They tell us that WE screwed up. According to Don Norman, over 90% of industrial accidents are blamed on human error. You know, if it were 5%, we might believe it. But when it is virtually always, shouldn't we realize that it is something else? That the systems were wrong? Perhaps.
Sidney Dekker believes that "human error is not a cause of accidents, but a symptom of trouble deeper inside a system". We are humans. We cannot be accurate and precise all the time. We are preoccupied, we are in different states of mind at different times, and we each have our own way of living our lives and dealing with challenges. As a result, we make mistakes. Is that really a failure on our part, or was the system not designed intuitively enough to prevent human mistakes, or at least guide the user when mistakes happen?
Murphy's law states that "Anything that can go wrong, will go wrong". While things can go wrong, helping users recover from such situations goes a long way in building credibility and loyalty with the user.
Recoverability from errors is a key element to consider while designing products. When errors occur, the following five elements can repair, to some degree, the damage the error might have caused (a sketch after the list combines them into a single error structure):
1. Provide visibility to the user of what was done
An error occurs when the user does something the system doesn't know how to handle. When users make mistakes and get no feedback, they're completely lost. For example, a sent email gets eaten up by a virus, and neither the sender nor the recipient knows a thing about it. When an error occurs, it's good to tell the user exactly what he did a few moments ago. This helps the user realize whether his actions were right or not and make amends accordingly.
2. Do/Show/Tell the user what went wrong
Once the error has occurred, the user needs to know what really went wrong in the first place. The message displayed should state clearly what actually went wrong. This information is in addition to the visibility above: the user is told what he did versus what went wrong after that.
3. Indicate how the user can reverse unwanted outcome
Users are least interested in geeky or innovative error messages. They just want to get out of the error situation as soon as possible. Error codes like 'Type 2 error number 10000345 occurred' are neither informative nor useful. The error message should tell the user how to reverse the error, or what the next best thing to do is in order to recover. In short, it is critical for the user to know how to get back to the state of the application where he left off before the error occurred. Additionally, giving useful advice for fixing the problem helps. For example, on an e-commerce app, just saying a book is out of stock is definitely worse than providing a 'Notify' feature that alerts you when the book is back in stock.
4. If reversibility is not possible, indicate this to the user
In some cases, reversing an error is not possible. In such cases, it is best to tell the user to force-close the application and start from scratch, or from a specific location in the app. Take the example of password fields. When a user enters a very simple password, an error message appears with a big list of requirements for a strong password. If, instead, the user were warned upfront about these requirements in a label below the password field, the hassle could be avoided.
5. Preserve User Data
The app must be able to preserve the user's data at all times and never, ever corrupt or leak confidential information. Period!
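Pulling the five elements together: below is a minimal sketch of what a recoverable error could carry with it, in Python. The class, field names and dialog text are hypothetical, not from any real UI framework.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RecoverableError:
    what_user_did: str                  # 1. visibility of the last action
    what_went_wrong: str                # 2. plain-language cause
    how_to_recover: str                 # 3. next best step for the user
    undo: Optional[Callable[[], None]]  # 3/4. a reversal, if one exists
    draft_preserved: bool               # 5. user data is never lost

def show(error: RecoverableError) -> None:
    print(f"You just did: {error.what_user_did}")
    print(f"What went wrong: {error.what_went_wrong}")
    if error.undo is not None:
        print(f"To recover: {error.how_to_recover}")
    else:
        # 4. if reversal is impossible, say so instead of pretending
        print("This action cannot be undone; your draft has been saved.")
    assert error.draft_preserved  # preserve user data at all times

show(RecoverableError(
    what_user_did="Tapped 'Send' on the service request",
    what_went_wrong="The server rejected the request format",
    how_to_recover="Fix the highlighted field and tap 'Send' again",
    undo=lambda: None,
    draft_preserved=True,
))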
An error that can be made will be made. For example, if you misspell 'Murph's Law' in Google, it displays results, but also shows 'Showing results for Murphy's Law'. It turns an error into a good feeling. Transforming an error situation into actually helping the user is an intelligent way to deal with an error. Here's a message from Don Norman about error messages: "Error messages punish people for not behaving like machines. It is time we let people behave like people. When a problem arises, we should call it machine error, not human error: the machine was designed wrong, demanding that we conform to its peculiar requirements. It is time to design and build machines that conform to our requirements. Stop confronting us: Collaborate with us."
Graceful recoverability from errors opens a new avenue for organizations to create great user experiences.
What do you think?