22 November, 2014

Managed Crowdtesting - An Augmented Approach to Testing




The testing industry is going through a sea of changes. The evolving IT landscape is becoming more consumer-driven, open-source and cloud-based. With increasing complexity in hardware and software choices, it is getting harder and harder to do good testing across a near-infinite number of platforms, devices, test configurations and a wide variety of user personas.

What is Crowdtesting?

Crowdtesting, or crowdsourced testing, is real-world testing in an on-demand model, delivered by highly skilled, qualified and geographically distributed professionals over a secure private platform.
Crowdtesting addresses several problems of traditional testing:
  • The same brains testing the software over and over
  • Fixed or limited availability of resources
  • Lack of user perspective
  • Lack of fresh ideas


Let me give you a short overview of Managed Crowdtesting, an extension of this model that shows how crowdtesting can be done better.

Managed Crowdtesting

A qualified Project Manager, typically a proven community leader or a person from the client or the platform company, designs or reviews the test strategy and approves or amends it to cater to the client's specific testing requirements. Each project includes an explanation and access to a forum where bugs and issues are discussed and additional questions can be asked. Testers submit documented bug reports and are rated based on the quality of their reports. The amount testers earn increases as they report more bugs that are approved by the project manager. The community combines aspects of collaboration and competition, as members work towards finding solutions to the stated problem.

Advantages of Crowdtesting

1. Representative scenarios from the real user base
2. Tight feedback loop with rapid feedback processing and agility
3. Comprehensiveness in use cases, platforms, tools, browsers, testers, etc. that is very hard to replicate in an in-house test lab
4. Cost efficiency
5. Diversity among the pool of testers lends itself to extensive testing
6. Reduced time to test, time to market and total cost of ownership, as most defects can be identified in a relatively short time, which leads to significant reductions in maintenance costs

Disadvantages of Crowdtesting

1. Governance efforts around security, exposure and confidentiality when opening a project to a wide user base for testing
2. Project management challenges that stem from the testers' diverse backgrounds, languages and experience levels
3. Quality assurance efforts to verify and improve bug reports, and to identify and eliminate duplicate bugs and false alarms
4. Equity and equality constraints in the reward mechanism, where remuneration is a function of the quality of contributions that meet a prescribed minimum standard


Where does Crowdtesting fit best?

Mobile Organizations

With Google, Apple and Microsoft practically giving away their development tools for free, there is a growing developer base creating mobile apps and responsive websites for the Android, iOS and Windows platforms. But it's easy to underestimate the cost of building and monetizing an app successfully. One way to save costs is to consider crowdtesting.
Crowdtesting is most suitable for applications that are user-centric. Users of mobile and gaming applications in particular expect apps to work across thousands of devices from different manufacturers, with different screen sizes, resolutions, network carriers and locations. This calls not just for a group of testers working on a handful of devices and configurations, but for an ocean of users with this kind of diversity.

Growth Stage Startups

With the Lean Startup revolution catching on in different parts of the world, startup founders today have gotten smarter about releasing cheaper or free versions of products in the beta stage. A few years ago, beta testing happened with a select group that was guarded from the general public. Now, many startups are opening up early versions of applications to users to gather quick and critical feedback. They want to fail faster and learn quicker.
Some applications can be tested only in a few locations, some need specific cable connections and network carriers, a few others need a specific network connection like 4G LTE or higher, and some need users of particular languages. In such cases, just any user will not do; specific users are needed, and that is where an engaged, managed Crowdtesting community comes into play.

Enterprises

Large enterprises can benefit from crowdsourced testing by simulating a large user base to understand usage patterns and improve based on feedback, while ensuring their applications run smoothly on a number of different devices, operating systems, browsers and language versions. Applications with a high defect exposure factor post-release are good candidates for Crowdtesting.
For example, Microsoft released the beta version of its Office 2010 productivity suite, which was downloaded and tested by 9 million people who provided 2 million valuable comments and insights, resulting in substantial product improvements.


Augmenting your test approach

I have heard several businessmen and salespeople speak about why offshore testing will work, or why distributed testing won't work, or how crowdtesting is a magic bullet, and so on. With more than a decade of experience in the industry, I can confidently say that there is no magic bullet. Every organization maintains a certain ethos, with its own work culture, talent pool and scores of technical debt. Software testing is the least of many organizations' problems, whether 50 years ago, today or even 50 years from now, because, in the customer's view, software testing consumes money rather than bringing it in. In such an overhead situation, selling software testing solutions bundled in different packages and pitched to customers as "the most innovative solution of the century" no longer makes sense.
The need of the hour is to pitch testing models and solutions as an augmented approach to testing.

Scenario 1 – An organization which employs traditional testing methodologies approaches you for testing

This organization, let's say, has a mature testing process in place and also has a "Test Center of Excellence" for all the testing/QA work that gets done within the organization. How would you add value to them? It's important to understand the needs of the customer, identify the pain points they are going through as a result of not testing or of doing poor testing, and pitch the model that fits them best. The customer might take a couple of test cycles to gauge whether that model works well or not.
In this case, if a fresh pair of eyes is what's needed, suggesting that several new team members within the company or team take up testing helps. If it needs to be done on a larger scale, Crowdtesting can be an option. Note that it is not the only option, but one of the options.

Scenario 2 - An organization is looking for diversity in test configurations and devices

Consider a large organization whose web or mobile applications are accessed from different operating systems, browsers and browser versions, and on multiple mobile devices across platforms like Android, iOS and Windows, from several manufacturers, with different screen sizes and resolutions. From a cost and time perspective, organizations often find it hard to test on such a variety of test configurations. Such a context is suitable for Crowdtesting, where a professional testing community works in a Bring Your Own Device (BYOD) model and tests the application, giving broader device and platform coverage.

Scenario 3 - An organization wants to solve its regression testing problem

Many legacy applications have a need for regression testing. While new features are conceptualized and implemented, keeping existing features from breaking is a big pain. The risk is further aggravated by the number of operating systems, browsers, mobile devices and other test configurations involved. Regression testing candidates are a great fit for Crowdtesting, where the crowd is capable of regressing on a variety of platforms and test configurations within a short period of time.


What does the future hold?

Crowdsourced testing clearly has its advantages and limitations. It cannot be considered a panacea for all testing requirements, and the power of the crowd should be employed diligently. The key to great Crowdtesting is to use it prudently, depending on the tactical and strategic needs of the organization that seeks crowdsourced testing services. It is important for the organization to embrace the correct model, identify the target applications, implement Crowdtesting, explore it for a few test cycles, monitor test results and tailor the model to suit its needs.


Why am I talking about Crowdtesting?

I recently joined PASS Technologies, which offers software testing services – offshore testing and Crowdtesting. While I have been in offshore testing services for about 11 years now, this is my first experience of how Crowdtesting works. I see a lot of benefits for organizations, be they services-based or product-based, in adopting Crowdtesting to augment their testing services.


What’s in it for a Customer

  • On-Demand Testing
  • Better Test Coverage
  • Faster Test Results
  • Cheaper
  • Scalable Solution


What’s in it for a Tester?

Testers get to "Earn, Learn and Grow" using the passbrains platform. Tester benefits include:
1. Earnings for approved bugs on a per-bug payout model
2. Networking with some of the coolest testers in our community
3. Recognition as a star tester in the community

What are you waiting for? Register on www.passbrains.com as a customer or tester and let us know how passbrains can help you.

References

Content references include thoughts and ideas from Dieter Spiedel, Mithun Sridharan and Mayank Mittal.


07 October, 2014

Interviewed by A1QA



Hello Readers,

I was recently interviewed by A1QA, a software quality assurance company based out of the US and UK, for their blog. This is the first time I have been interviewed in such depth on the technical aspects of my work. I cover topics like mobile app testing, user experience testing and Crowdtesting for the most part.


I am sharing it here, just in case some of you find it useful.

Regards,
Pari

My Interviews

I have been interviewed by many folks. Yet, nothing is in one place so far. This is a placeholder blog post for all my interviews published so far on the World Wide Web.

September 2014


August 2014


March 2014


July 2012

An interview with Parimala Hariprasad : 1 year @ Moolya


June 2011

Parimala Hariprasad - Part 1 @ IT Files
Parimala Hariprasad - Part 2 @ IT Files
Parimala Hariprasad - Part 3 @ IT Files


May 2011

Interview with Parimala Hariprasad @ Testing Circus


December 2010

Interview with Parimala Hariprasad @ Software Test and Performance Collaborative


Regards,
Parimala Hariprasad

06 August, 2014

Speaking at CAST 2014 & The Saturday Night Project


Over the last 2 years, I have done crazy things. I attended a design conference, a developer conference and a storytelling workshop by the well-known brand expert and master storyteller Ameen Haque, took a design course tutored by Don Norman on 'The Design of Everyday Things' and also took up singing classes with my daughter (Oh! I just bray!). I also did several things at work that I had not done before. All along, I went with the flow, picked up any challenge that appeared exciting and enjoyed what I did.

The Saturday Night Project

I spent more than a year, between 2013 and 2014, studying design and user experience. I was fascinated by what the field of User Experience Design had to offer when I led a user experience testing project at my previous job. The experience has been exhilarating. I learned about design, about applying design concepts to testing and about life itself. All of this happened on Saturday nights after I put my kids to sleep. It's been a wonderful journey. These learnings are very special because a lot of effort went into them.
It's time I shared these learnings with the world. And what better conference to do it at than CAST 2014!

CAST 2014

CAST is a very special conference for me (apart from Let's Test and Bug DeBug). I have several friends who have attended it and told me that I MUST attend, even if it is at my own expense. Two years ago, I told myself that someday I would present at CAST. My dream is coming true this year. My family thinks I am crazy to be spending a bomb on this trip. For me, it's worth my time, effort and money for the wonderful testers I am going to meet there. I am excited about this conference.

I am presenting a paper, "Testing Lessons From The Design Thinking World", at CAST 2014 in New York. You can check out my Abstract HERE.

Date :: 12th August 2014, 4.50 PM
Venue :: KC909, Kimmel Center, New York

Key Highlights of My CAST Talk
  • Emotions Testing
  • Multi-Sensory Experience
  • Testing for Errors
  • Customer Touch Points

Trailer of My CAST Talk
The CAST 2014 team (Lalit Bhramare and Ben Yaroch) helped set up a video trailer of my talk, which is a sneak preview of what I am going to talk about in my session. Watch it below.



Token of Thanks

I would like to thank scores of my friends and colleagues who helped me on this journey. First and foremost is my teacher and colleague Pradeep Soundararajan, without whose push I would not have gotten this far in my UXD study. I would also like to thank Dhanasekar Subramaniam, Bhavana, Ravisuriya, Dheeraj Karanam and David Greenlees, who kept sending me information about related topics and articles for several months in a row. I would also like to thank all my team members who have supported me in my search for UXD Nirvana :)

Special thanks to James Bach, David Greenlees and Lee Copeland for agreeing to review my slide deck and providing their valuable feedback. Without their inputs, my talk would not have been what it is right now.

I would also like to thank Don Norman, Jason Pollard and Rob Sabourin (Testing Lessons series), whose work has inspired me a lot over the last year.

What Next?

Come to my session on 12th Aug 2014 at KC909 at 4.50 PM
See you there!

You can't make it? Don't lose heart! Watch CAST 2014 Live HERE

Regards,
Pari



Testing for Errors

This article was originally published on passbrains blog HERE.

Great designs transform the way we live and we all act as designers in our own simple ways. When we rearrange objects on our desks, the furniture in our living rooms, and the things we keep in our cars, we are designing. Through our designs, we transform houses into homes, spaces into places and things into belongings. While we may not have any control over the design of the many objects we purchase, we do control what we choose to purchase.

Faulty Designs

A year ago, at least 40 people were killed in a tragic accident involving a private Volvo bus on the Bangalore-Hyderabad National Highway. The incident happened when the bus, reportedly trying to overtake a vehicle at high speed, hit a culvert and caught fire. Before the passengers could realize what had happened, they were charred to death. Investigations revealed that this accident was a result of poor design and the absence of safety measures in the bus.


The point here is that design can play a key role in making or breaking products. The Volvo bus was never designed to hit culverts, nor was it tested for that. However, the driver ended up hitting one. If designs are not tested in multiple contexts like these, faults can wreak havoc. Faulty designs are the result of inaccurate mental models held by designers and users, contrary to the system image of the product as it really exists. Let's take a brief look at what mental models are and how they can help us create better designs that handle error situations effectively.


Mental Models

A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems (akin to a personal algorithm) and doing tasks (from Wikipedia).

According to Don Norman, there are three aspects to mental models:
  • Designer’s model: The model present in a designer’s mind
  • User’s model: The model a user develops when he sees/attempts to operate the system
  • System image: The way a system operates, the way it responds, manuals, instructions etc.

Every designer builds a model of the system or product, while the user develops a mental model of their own. Any inconsistencies between these models lead to errors. But errors should be easy to detect, they should have minimal consequences, and, if possible, their effects should be reversible.

Testing for Errors

While speaking, we can correct ourselves if we stumble or mess up. Products and systems often do not correct themselves, because they are only as intelligent as the people who built them. This leads to the "slip", the most common error – when we intend to do one thing and accidentally do another.

Well-designed products allow us to detect slips through feedback. For example, in a delete operation, it is good to ask for a confirmation to verify that the user wants to proceed. If the operation is irrevocable, it is better to warn the user of the consequences and take their consent. A general heuristic is to never take away control from the user.
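
To make this concrete, here is a minimal Python sketch (the task-list app and its method names are my own invention for illustration, not from any real product) of a delete flow that gives feedback, asks for confirmation and keeps the action reversible, so control stays with the user:

```python
# A minimal sketch of a delete flow that gives feedback, asks for
# confirmation, and keeps the action reversible (hypothetical task-list app).

class TaskList:
    def __init__(self, tasks):
        self.tasks = list(tasks)
        self._last_deleted = None  # undo buffer: (index, task)

    def delete(self, index, confirmed=False):
        task = self.tasks[index]
        if not confirmed:
            # Feedback first: tell the user exactly what will happen.
            return f"Delete '{task}'? Call delete(index, confirmed=True) to proceed."
        self._last_deleted = (index, self.tasks.pop(index))
        return f"Deleted '{task}'. Call undo() to restore it."

    def undo(self):
        if self._last_deleted is None:
            return "Nothing to undo."
        index, task = self._last_deleted
        self.tasks.insert(index, task)
        self._last_deleted = None
        return f"Restored '{task}'."


todo = TaskList(["pay bills", "book tickets"])
print(todo.delete(0))                  # asks for confirmation
print(todo.delete(0, confirmed=True))  # deletes, offers undo
print(todo.undo())                     # reverses the unwanted outcome
```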

Recoverability of errors is a key aspect in designing products. When errors do occur, the following should happen:
  • Give visibility to the user of what was done
  • Do/Show/Tell the user what went wrong
  • Indicate how the user can reverse unwanted outcome
  • If reversibility is not possible, indicate this to the user
  

Error Messages Coverage

Error messages coverage can be achieved at multiple levels:

Errors-based Scenarios Testing
Testers could get a list of all the error messages programmed into the product and design scenarios for each and every one of them.

Negative Testing 
Negative testing is not the same as error messages testing. In error messages testing, you start with known error handling and test it; this is essentially "positive" testing of the error handling code. In negative testing, however, you think differently. Negative testing means to "negate" required conditions. In other words, you consider all the things that the programmer or designer requires for the code to work, then systematically block those conditions. Example: the program needs memory, so reduce the available memory.
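
To illustrate the idea, here is a small Python sketch of my own (the settings-file requirement and all names are invented for illustration): the function below requires an existing settings file, and the negative test deliberately takes that condition away and checks that the failure is reported clearly rather than as a raw crash.

```python
# Negative testing sketch: block a required condition and observe the failure.
import json
import os
import tempfile

def load_settings(path):
    """Requires an existing, readable JSON file -- that is the condition we will negate."""
    if not os.path.exists(path):
        raise FileNotFoundError(f"Settings file '{path}' is missing. "
                                "Create it or pass a valid path.")
    with open(path) as f:
        return json.load(f)

def test_missing_settings_file_is_reported_clearly():
    # Negate the condition: point at a path that does not exist.
    missing = os.path.join(tempfile.gettempdir(), "no_such_settings.json")
    try:
        load_settings(missing)
        assert False, "Expected a clear failure when the settings file is absent"
    except FileNotFoundError as error:
        # The error message should say what went wrong and what to do next.
        assert "missing" in str(error)

test_missing_settings_file_is_reported_clearly()
print("Negative test passed: missing settings file is reported clearly.")
```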

Recoverability Testing Matrix
Every error message needs to be tested for recoverability against the points below (a small sketch of such a matrix follows this list):
  • Visibility to the user of what was done
  • Do/Show/Tell them what went wrong
  • How the user can reverse unwanted outcome
  • If reversibility is not possible, indicate to the user what needs to be done next
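
One possible way to keep such a matrix is as a simple table in code. In the Python sketch below, the error messages and verdicts are invented examples; there is one row per error message and one column per question from the list above.

```python
# A possible shape for a recoverability testing matrix: one row per error
# message, one column per recoverability question (all entries are invented).
RECOVERABILITY_MATRIX = [
    # (error message,            visible?, explains what went wrong?, reversible / next step given?)
    ("Payment declined",         True,     True,                      True),
    ("File could not be saved",  True,     False,                     True),
    ("Unknown error occurred",   True,     False,                     False),
]

def report_gaps(matrix):
    # Print which recoverability criteria each error message fails to meet.
    for message, visible, explained, recoverable in matrix:
        gaps = []
        if not visible:
            gaps.append("not visible to the user")
        if not explained:
            gaps.append("does not say what went wrong")
        if not recoverable:
            gaps.append("offers no reversal or next step")
        status = "OK" if not gaps else "; ".join(gaps)
        print(f"{message}: {status}")

report_gaps(RECOVERABILITY_MATRIX)
```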

Some Examples
Failure Usability Heuristic by Ben Simo
Error Elimination Testing by David Greenlees
Feedback Parser by Santhosh Tuppad

** This article is inspired by Don Norman’s book “The Design of Everyday Things”
** Negative testing input was provided by James Bach


Regards,
Pari