22 November, 2014

Managed Crowdtesting - An Augmented Approach to Testing

The testing industry is going through an ocean of change. The evolving IT landscape is getting more consumer-driven, open-source and cloud-based. With increasing complexity in hardware and software choices, it's getting harder and harder to get good testing done across a seemingly infinite number of platforms, devices, test configurations and a wide variety of user personas.

What is Crowdtesting?

Crowdtesting, or crowdsourced testing, is real-world testing in an on-demand model, delivered through highly skilled, qualified and geographically distributed professionals over a secure private platform.
Crowdtesting addresses several problems of traditional testing:
  • The same brains testing the software over and over
  • Fixed or limited availability of resources
  • Lack of user perspective
  • Lack of fresh ideas

Let me give you a short overview of Managed Crowdtesting, an extension of the model that shows how crowdtesting can be done better.

Managed Crowdtesting

A qualified Project Manager, typically a proven community leader or a person from the client or the platform company, designs or reviews the test strategy and approves or amends it to cater to the client's specific testing requirements. Each project includes an explanation and access to a forum where bugs and issues are discussed and additional questions can be asked. Testers submit documented bug reports and are rated based on the quality of their reports. The amount testers earn increases as they report more bugs that are approved by the project manager. The community combines aspects of collaboration and competition, as members work toward finding solutions to the stated problem.

Advantages of Crowdtesting

1. Representative scenarios from the real user base
2. Tight feedback loop with rapid feedback processing and agility
3. Comprehensiveness in use cases, platforms, tools, browsers, testers, etc. that is very hard to replicate in an in-house test lab
4. Cost efficiency
5. Diversity among the pool of testers, which lends itself to extensive testing
6. Reduced time to test, time to market and total cost of ownership, as most defects can be identified in a relatively short time, leading to significant reductions in maintenance costs

Disadvantages of Crowdtesting

1. Governance efforts around security, exposure and confidentiality when offering a community project to a wide user base for testing
2. Project management challenges that stem from the testers' diverse backgrounds, languages and experience levels
3. Quality assurance efforts to verify and improve bug reports, and to identify and eliminate duplicate bugs and false alarms
4. Equity and equality constraints in the reward mechanism, with remuneration as a function of the quality of contributions that meet a prescribed minimum standard

Where does Crowdtesting fit best?

Mobile Organizations

With Google, Apple and Microsoft practically giving away their development tools for free, there is a growing developer base creating mobile apps and responsive websites for the Android, iOS and Windows platforms. But it's easy to underestimate the costs of building and monetizing an app successfully. One way to save costs is to consider crowdtesting.
Crowdtesting is most suitable for applications that are user-centric. Users of mobile and gaming applications in particular expect apps to work across thousands of devices from different manufacturers, device sizes, resolutions, network carriers and locations. This calls not just for a group of testers testing on a handful of devices and configurations, but for an ocean of users with that kind of diversity.

Growth Stage Startups

With the Lean Startup revolution catching on in different parts of the world, startup founders today have gotten smarter by releasing cheaper or free versions of products at the beta stage. A few years ago, beta testing happened with a select group that was guarded from the general public. Now, many startups are opening up early versions of applications to users to gather quick and critical feedback. They want to fail faster and learn quicker.
Some applications can be tested in only a few locations, some need specific cable connections and network carriers, a few others need a specific network connection like 4G LTE or higher, some need users of a specific language, and so on. In such cases, just any user will not do. Specific users are needed, and that is where an engaged and managed crowdtesting community comes into play.


Large Enterprises

Large enterprises can benefit from crowdsourced testing by simulating a large user base to understand usage patterns and improve on feedback, while ensuring their applications run smoothly on a number of different devices, operating systems, browsers and language versions. Applications with a high defect exposure factor post release are good candidates for crowdtesting.
For example, Microsoft released the beta version of its Office 2010 productivity suite, which was downloaded and tested by 9 million people who provided 2 million valuable comments and insights, resulting in substantial product improvements.

Augmenting your test approach

I have heard several businessmen and salespeople speak about why offshore testing will work, or why distributed testing won't work, or how crowdtesting is a magic bullet, and so on. With more than a decade of experience in the industry, I can confidently say that there is no magic bullet. Every organization maintains a certain ethos, with a comfortable work culture, a talent pool of people and scores of technical debt. Software testing is the least of many organizations' problems, be it 50 years ago, today or even 50 years later. Because, in the customer's eyes, software testing consumes money; it doesn't bring in money. In such a situation, selling software testing solutions bundled in different packages to customers as "the most innovative solution of the century" no longer makes sense.
The need of the hour in providing testing solutions is to pitch testing models/solutions as an augmented approach to testing.

Scenario 1 – An organization which employs traditional testing methodologies approaches you for testing

This organization, let's say, has a mature testing process in place and also has a "Test Center of Excellence" for all the testing/QA work that gets done within the organization. How would you add value to them? It's important to understand the needs of the customer, identify the pain points they are going through as a result of not testing or of doing poor testing, and pitch a model that fits them best. The customer might take a couple of test cycles to gauge whether that model works well or not.
In this case, if a fresh pair of eyes is what's needed, suggesting that several new team members in the company or team take up testing helps. If it needs to be done on a larger scale, crowdtesting can be an option. Note that it is not the only option, but one of the options.

Scenario 2 - An organization is looking for diversity in test configurations and devices

Consider a large organization with web or mobile applications accessed from different operating systems, browsers and browser versions, multiple mobile devices, different platforms like Android, iOS and Windows, several manufacturers, and different screen sizes and resolutions. From a cost and time perspective, organizations often find it hard to test on such a variety of test configurations. Such a context is suitable for crowdtesting, where a professional testing community works in a Bring Your Own Device (BYOD) model and tests the application, giving broader device and platform coverage.

Scenario 3 - An organization wants to solve its regression testing problem

Many legacy applications have a need for regression testing. While new features are conceptualized and implemented, keeping existing features from breaking is a big pain. This risk is further aggravated by the number of operating systems, browsers, mobile devices and other test configurations. Regression testing candidates are a great fit for crowdtesting, where the crowd is capable of regression testing on a variety of platforms and test configurations within a short period of time.

What does the future hold?

Crowdsourced testing clearly has its advantages and limitations. It cannot be considered a panacea for all testing requirements, and the power of the crowd should be diligently employed. The key to great crowdtesting is to use it prudently, depending on the tactical and strategic needs of the organization that seeks crowdsourced testing services. It is important for the organization to embrace the correct model, identify the target applications, implement crowdtesting, explore it for a few test cycles, monitor test results and customize the model to suit its needs.

Why am I talking about Crowdtesting?

I recently joined PASS Technologies, which is in software testing services – offshore testing and crowdtesting. While I have been in offshore testing services for about 11 years now, this is my first experience of how crowdtesting works. I see a lot of benefits for organizations, be they services-based or product-based, in adopting crowdtesting to augment their testing.

What’s in it for a Customer

  • On-Demand Testing
  • Better Test Coverage
  • Faster Test Results
  • Cheaper
  • Scalable Solution

What’s in it for a Tester?

Testers get to "Earn, Learn and Grow" using the passbrains platform. Tester benefits include:
1. Earning for approved bugs on a per-bug payout model
2. Networking with some of the coolest testers in our community
3. Recognition as star testers in the community

What are you waiting for? Register on www.passbrains.com as a customer or tester and let us know how passbrains can help you.


Content references include thoughts and ideas from Dieter Spiedel, Mithun Sridharan and Mayank Mittal.

07 October, 2014

Interviewed by A1QA

Hello Readers,

I was recently interviewed by A1QA, a software quality assurance company based out of the US and UK, for their blog. This is the first time I have been interviewed in such depth on the technical aspects of my work. I cover topics like mobile apps testing, user experience testing and crowdtesting for the most part.

I am sharing it here, just in case some of you find it useful.


My Interviews

I have been interviewed by many folks, yet nothing is in one place so far. This is a placeholder blog post for all my interviews published so far on the World Wide Web.

September 2014

August 2014

March 2014

July 2012

An interview with Parimala Hariprasad : 1 year @ Moolya

June 2011

Parimala Hariprasad - Part 1 @ IT Files
Parimala Hariprasad - Part 2 @ IT Files
Parimala Hariprasad - Part 3 @ IT Files

May 2011

Interview with Parimala Hariprasad @ Testing Circus

December 2010

Interview with Parimala Hariprasad @ Software Test and Performance Collaborative


06 August, 2014

Speaking at CAST 2014 & The Saturday Night Project

Over the last 2 years, I have done crazy things. I attended a design conference, a developer conference and a storytelling workshop by the well-known brand expert and master storyteller Ameen Haque, took a design course tutored by Don Norman on 'The Design Of Everyday Things' and also took up some singing classes with my daughter (Oh! I just bray!). I also did several things at work that I have not done before. All along, I went with the flow, picked up any challenge that appeared exciting and enjoyed what I did.

The Saturday Night Project

I spent more than a year studying design and user experience between 2013 and 2014. I was amazed by what the User Experience Design field had to offer when I led a user experience testing project at my previous job. The experience has been exhilarating. I learned about design, about applying design concepts to testing and about life itself. All this happened on Saturday nights after I put my kids to sleep. It's been a wonderful journey. These learnings are very special because a lot of effort went into them.
It's time I share these learnings with the world. And what better conference than CAST 2014!

CAST 2014

CAST is a very special conference for me (apart from Let's Test and Bug DeBug). I have several friends who have attended this conference and told me that I MUST attend it, even at my own expense. Two years ago, I told myself that someday I would present at CAST. My dream is coming true this year. My family thinks I am crazy to be spending a bomb on this trip. For me, it's worth my time, effort and money for the wonderful testers I am going to meet there. I am excited about this conference.

I am presenting a paper, "Testing Lessons From The Design Thinking World", at CAST 2014 in New York. You can check out my abstract HERE.

Date :: 12th August 2014, 4.50 PM
Venue :: KC909, Kimmel Center, New York

Key Highlights of My CAST Talk
Emotions Testing
Multi-Sensory Experience
Testing for Errors
Customer Touch Points

Trailer of My CAST Talk
CAST 2014 Team (Lalit Bhramare and Ben Yaroch) helped set up a video trailer of my talk which is a sneak preview into what I am going to talk about in my session. Watch it below.

Token of Thanks

I would like to thank scores of my friends and colleagues who helped me in this journey. The first and foremost on the list is my teacher and colleague Pradeep Soundararajan, without whose push I would not have gotten this far in my UXD study. I would also like to thank Dhanasekar Subramaniam, Bhavana, Ravisuriya, Dheeraj Karanam and David Greenlees, who kept sending me information about related topics and articles for several months in a row. I would also like to thank all my team members who have supported me in my search for UXD Nirvana :)

Special thanks to James Bach, David Greenlees and Lee Copeland for agreeing to review my slide deck and provide their valuable feedback. Without their inputs, my talk would not have been what it is right now.

I would also like to thank Don Norman, Jason Pollard and Rob Sabourin (Testing Lessons series), whose work has inspired me a lot over the last year.

What Next?

Come to my session on 12th Aug 2014 at KC909 at 4.50 PM
See you there!

You can't make it? Don't lose heart! Watch CAST 2014 Live HERE


Testing for Errors

This article was originally published on passbrains blog HERE.

Great designs transform the way we live and we all act as designers in our own simple ways. When we rearrange objects on our desks, the furniture in our living rooms, and the things we keep in our cars, we are designing. Through our designs, we transform houses into homes, spaces into places and things into belongings. While we may not have any control over the design of the many objects we purchase, we do control what we choose to purchase.

Faulty Designs

A year ago, at least 40 people were killed in a tragic accident involving a private Volvo bus on the Bangalore–Hyderabad National Highway. The accident happened when the bus, reportedly trying to overtake a vehicle at high speed, hit a culvert and caught fire. Before the passengers could realize what had happened, they were charred to death. Investigations revealed that the accident was a result of poor design and an absence of safety measures in the bus.


The point here is that design can play a key role in making or breaking products. The Volvo bus was never designed to hit culverts, nor was it tested for that. Yet the driver ended up hitting a culvert. If faulty designs are not tested in multiple contexts like these, they can wreak havoc. Faulty designs are the result of inaccurate mental models held by designers and users, contrary to the system images of products as they exist in reality. Let's take a brief look at what mental models are and how they can help us create better designs that handle error situations effectively.

Mental Models

A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems (akin to a personal algorithm) and doing tasks (from Wikipedia).

According to Don Norman, there are three aspects to mental models:
  • Designer’s model: The model present in a designer’s mind
  • User’s model: The model a user develops when he sees/attempts to operate the system
  • System image: The way a system operates, the way it responds, manuals, instructions etc.

Every designer builds a model of the system or product, while the user has a mental model of his own. Any inconsistencies between these models lead to errors. Errors should be easy to detect, they should have minimal consequences, and, if possible, their effects should be reversible.

Testing for Errors

While speaking, we can correct ourselves if we stumble or mess up. Products and systems often do not correct themselves because they are only as intelligent as the people who built them. This leads to the "slip", the most common error – when we intend to do one thing and accidentally do another.

Well-designed products allow us to detect slips through feedback. For example, in a delete operation, it is good to ask for confirmation to verify that the user wants to proceed. If the operation is irrevocable, it is better to warn him of the consequence and obtain his consent. A general heuristic is to never take away control from the user.

Recoverability of errors is a key aspect in designing products. When errors do occur, the following should happen:
  • Give visibility to the user of what was done
  • Do/Show/Tell the user what went wrong
  • Indicate how the user can reverse unwanted outcome
  • If reversibility is not possible, indicate this to the user

Error Messages Coverage

Error messages coverage can be achieved at multiple levels:

Errors-based Scenarios Testing
Testers could get a list of all error messages programmed into the product and design scenarios for each and every error message.

Negative Testing 
Negative testing is not the same as error messages testing. In error messages testing, you start with known error handling and test it; this is essentially "positive" testing of the error handling code. In negative testing, you think differently. Negative testing means to "negate" required conditions. In other words, you consider all the things that the programmer/designer requires for the code to work, then systematically block those conditions. Example: the program needs memory, so reduce memory.
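To make the idea concrete, here is a minimal Python sketch of negative testing. The names (`save_report`, `full_disk`) and the failure condition are hypothetical, not from any real product: the code under test requires writable storage, so the test negates that condition and checks for graceful failure instead of a crash.

```python
def save_report(report, write_fn):
    """Code under test: persists a bug report via an injected writer.
    Returns "ok" on success, "error: <reason>" on a handled failure."""
    try:
        write_fn(report)
        return "ok"
    except OSError as exc:
        return f"error: {exc}"  # handled gracefully, no crash

# Positive test of error handling: the required condition holds.
assert save_report("bug #1", lambda r: None) == "ok"

# Negative test: block the required condition by simulating a full disk.
def full_disk(report):
    raise OSError("disk full")

assert save_report("bug #1", full_disk) == "error: disk full"
```

The same pattern applies to any required condition: network access, free memory, valid configuration files, and so on.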

Recoverability Testing Matrix
Every error message needs to be tested for recoverability:
  • Visibility to the user of what was done
  • Do/Show/Tell them what went wrong
  • Indicate how the user can reverse the unwanted outcome
  • If reversibility is not possible, indicate to the user what needs to be done next
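As a sketch, the matrix above can be captured as a simple table in code. The error message and pass/fail observations below are hypothetical examples:

```python
# The four recoverability criteria from the checklist above.
CRITERIA = [
    "visibility of what was done",
    "tells the user what went wrong",
    "indicates how to reverse the outcome",
    "indicates next steps if irreversible",
]

def recoverability_row(error_message, observations):
    """Pair each criterion with the tester's pass/fail observation."""
    return {"error": error_message, **dict(zip(CRITERIA, observations))}

# One row of the matrix for a hypothetical error message.
row = recoverability_row("File could not be saved", [True, True, False, True])
gaps = [c for c in CRITERIA if not row[c]]
assert gaps == ["indicates how to reverse the outcome"]
```

One such row per error message gives a matrix that shows, at a glance, where recoverability is weakest.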

Some Examples
Failure Usability Heuristic by Ben Simo
Error Elimination Testing by David Greenlees
Feedback Parser by Santhosh Tuppad

** This article is inspired by Don Norman’s book “The Design of Everyday Things”
** Negative testing input was provided by James Bach


19 July, 2014

Supportability Experience & Customer Touchpoints Testing

This article was originally published on passbrains blog HERE.

The eCommerce industry is burgeoning in India, with new players setting up shop every now and then. But what is it that sets Flipkart, India's largest online retailer, apart? Flipkart identified two key problems in the eCommerce world – delayed delivery and poor customer service. They fixed these problems and made online shopping a delightful and memorable experience for their customers. It is no surprise that Flipkart is being touted as a competitor to Amazon.

Applying this analogy to the testing world: while testing a product, more often than not, testers are focused on which flow the user executes or how the user interface looks. What goes missing is attention to support processes like call verifications, email communication, online chat, service request handling and other services. Does a user receive a welcome email upon joining? Does he get a verification call from the company? Does someone call the user if his need is not addressed within a certain time? How is a complaint from a user handled? This describes the supportability experience of the product.

Customer Touchpoints

A customer touchpoint describes the interface of a product or service with customers before, during and after a transaction (adapted from Wikipedia). Often, a user is unhappy not just with the product or service, but with the customer touchpoints too. Think of yourself as a user. How many times did you stop yourself from visiting that supermarket whose attendant didn't pay attention to your questions? How did you feel when you were the first to enter a pharmacy, yet the pharmacist served customers who came after you? Supportability factors go a long way in defining customer touchpoints and how users feel about the product and the organization in general.

Supportability Factors

What are the factors that create product loyalty? Are they in the product itself? Price? Color? Quality? Quantity? Something else? Great user experience is the sum total of all of these aspects, including the supportability factors listed below:
• Calls / SMSes
• Email
• Chat
• Service Requests
• Feedback
• Field Visits

If a user calls customer care and is put on hold for 30 minutes, he will not like it. If he gets 20 messages a day on his phone after buying a product, he will be annoyed. If an email complaint from the user is never acknowledged or responded to, he will never complain to the organization again. He will complain in public over the internet by writing up his story of poor experience, with far wider reach and visibility, and hence discourage other users from buying the same product or service.

If an organization provides a chat channel but no support personnel are available to help customers, this damages the reputation of the organization. From time to time, users might raise service requests (or tickets) to solve problems with the products they are using. If the tier-1 and tier-2 analysts don't know anything about the service requests they handle, it becomes a showdown of sorts. How user feedback is handled also lends credibility to the organization's care model for its users. When a field service technician from the organization visits the user's site to fix or replace the product, the user's experience with the technician goes a long way in determining whether the user will remain loyal or not.

How to measure Customer Touchpoints?

Supportability factors are the key to determining whether customer touchpoints are good or bad, whether the products are making a good impact or not, and whether customers are sticking around or saying goodbye. Products have to be tested for supportability factors to measure users' experience at several touchpoints. How do testers test for this? There is a way.
Testers can come up with a survey questionnaire that asks questions about the different support factors. Let's take the example of the activation process for a new network connection on a cell phone. The user needs to call the call center and provide details to get the network activated, which usually takes up to 24 hours in India. We could come up with a list of questions like:

Supportability Rating Matrix
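To make this concrete, here is a minimal sketch of how such a rating matrix could be tallied. The questions and 1–5 ratings below are hypothetical, not taken from a real survey:

```python
# Hypothetical survey responses on a 1-5 scale (5 = excellent) for the
# network activation scenario described above.
responses = {
    "Was your call answered within 2 minutes?": 4,
    "Did the agent explain the activation steps clearly?": 3,
    "Was the connection activated within 24 hours?": 5,
    "Did you receive an SMS confirming activation?": 2,
}

# Summarize into an overall supportability rating and flag the weak spots.
average = sum(responses.values()) / len(responses)
weak_spots = [q for q, score in responses.items() if score <= 2]
assert average == 3.5
assert weak_spots == ["Did you receive an SMS confirming activation?"]
```

Collected across many users, averages like this show which supportability factors need the most attention.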

These questions might seem unrelated to testing, but they are testing-related in reality, because if we don't test the processes in the organization that serve products and product users, excellent products might fade away in a short period of time. Apple became the Apple it is today because of the time and money it spent on every little detail associated with the product – be it the product itself, the packaging, the color, the support experience and so on. It is becoming increasingly important to create good customer touchpoints, because users are no longer looking just for products that serve their needs, but also for engaging experiences while using the product.

What customer touchpoints have you had trouble with? Share your experiences.

28 June, 2014

Multi-Sensory Experience – Five Senses Theory

This article was originally published on passbrains blog HERE.

Jinsop Lee, an industrial designer, believes that great design appeals to all five senses. He calls this the Five Senses Theory. Jinsop gave a TED talk on the topic a short while ago. According to him, one can grade any experience on all five senses. For example, you can grade eating noodles on sight, smell, touch, taste and sound. Similarly, you can grade your biking experience. Jinsop graded himself on a bunch of experiences like bungee jumping and playing games on two different consoles, among others. The five senses graph for Jinsop's experience on the Nintendo Wii versus older gaming consoles is displayed below. It clearly shows which gaming console he preferred.

This Five Senses Theory can be applied to user experience testing too. Users can be asked to review the applications under test and map them on a scale of 1-10 on all five senses. The broader the area covered, the better the experience. Further, this theory can be customized to rate applications at the feature level or the flow level. The Flow example below describes a user's experience when he was using (learning) the app 'X' for the first time.

The Feature example below shows how the user felt about the 'Seen' feature on the Facebook Chat window.
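The "broader the area covered" idea can even be computed. On a five-axis radar chart with axes 72° apart, the area of the traced polygon is the sum of the five triangles between adjacent axes. A minimal sketch, with hypothetical gradings:

```python
import math

def radar_area(scores):
    """Area of the polygon traced by sense scores on a radar chart whose
    axes are evenly spaced: the sum of triangles between adjacent axes."""
    n = len(scores)
    wedge = math.sin(2 * math.pi / n) / 2  # area factor per triangle
    return sum(scores[i] * scores[(i + 1) % n] * wedge for i in range(n))

# Hypothetical gradings (sight, smell, touch, taste, sound), scale 1-10.
new_console = [9, 2, 8, 1, 9]
old_console = [6, 2, 5, 1, 6]
assert radar_area(new_console) > radar_area(old_console)  # broader area wins
```

Note that the area depends on the order of the axes, so the same axis order must be used when comparing two experiences.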

Several such experiments can be performed with users and testers to understand how different senses respond to different scenarios in applications. Like any other framework driven by individuals, the Five Senses Theory has its limitations too:
  1. It varies from person to person, as everyone's senses may not work the same way.
  2. All senses may not be applicable to all people. Some differently-abled people may not be able to hear or see.
  3. It is hard to implement with large sets of users.
  4. For some products, all senses may not be applicable. For example, how do you rate this article for taste using this theory?

Despite its drawbacks, the Five Senses Theory is a good technique for understanding how products can be designed for a multi-sensory experience.

Paper Prototyping

Paper prototyping is another technique that the testing world can adopt. In this technique, a tester wears a designer's hat and designs prototypes of screens or pages. In human–computer interaction, paper prototyping is a widely used method in the user-centered design process, a process that helps developers create software that meets users' expectations and needs – in this case, especially for designing and testing user interfaces. It is throwaway prototyping and involves creating rough, even hand-sketched, drawings of an interface to use as prototypes, or models, of a design. While paper prototyping seems simple, this method of usability testing can provide a great deal of useful feedback, resulting in improved product design.

How to apply Paper Prototyping to Testing?
Testers take existing applications (web or mobile), view them page by page or screen by screen and understand the design. They perform basic tests on design, UI and business logic for each screen. They also interview a few users about what they think of the design. Based on the information gathered, they re-create the screens or pages on pieces of paper and share them with the design teams.

How different are these prototypes?
Designers create prototypes already, so why reinvent the wheel? Testers gather vast and diverse knowledge over time by testing multiple products and applications in multiple domains. This knowledge must be put back into the world in every way possible. For example, testers might say, 'This button must be in this location', or 'This UI element must be in this color', or 'Remove this UI element as it is redundant'. This feedback is driven by testers' knowledge of different applications, domains and industries. A step forward from here would be to incorporate these decisions and create fresh prototypes of the applications, which can then be reviewed by designers, developers and product owners for further discussion.

An advanced approach to paper prototyping is to design two different prototypes, show them to a group of users and gather feedback on which was the bigger hit. Going to stakeholders with such information helps testers build credibility. One might object, 'This is not a tester's job'. But testers are information providers, and any information that contributes to a better product is useful information. Hence, this is a tester's job.

10 May, 2014

Emotions Testing - An Introduction

Several years ago, Don Norman was on a radio show along with designer Michael Graves. He had just criticized one of Graves’ creations, the “Rooster” teapot, as being pretty to look at, but difficult to use—when a listener called in. The caller owned the Rooster. “I love my teapot,” he said defensively. “When I wake up in the morning and stumble across the kitchen to make my cup of tea, it always makes me smile.” His message seemed to be: “So what if it’s a little difficult to use? It’s so pretty it makes me smile, and first thing in the morning, that’s most important.”  

Rooster Teapot

Cognitive scientists now understand that emotion is a necessary part of life, affecting how we feel, behave, and think. Indeed, emotion makes us smart. Without emotions, our decision-making ability would be impaired. 

My first exposure to the value of emotions came from Michael Bolton's ideas. His famous calendar exercise, exploring the emotions of a user as he used a calendar app, was a great inspiration. I totally forgot about this aspect until a year ago, when I started work on a user experience testing project. In what I believe was almost a coincidence, I presented my work at the UX India 2013 conference in a 10-minute Rapid Fire talk. It was after this talk that the idea for Emotions Testing was born. Over the next few months, I spent several hours fine-tuning my thoughts in this area.

A typical user is like Marilyn Monroe - Highly Demanding, Yet Realistic!

Emotions Testing is evaluating the emotional state of the user before, during and after the product is used, and identifying the pain points thereof. This technique can be used to evaluate the product against the different emotions a user goes through. Test results can be represented on an Emoticon Dashboard as described below:

Emoticon Dashboard

What you see in the picture is a small snapshot of an Emoticon Dashboard prepared after testing a product for emotions. A happy emoticon denotes good user experience and a sad emoticon denotes bad user experience. At the end of the report, you can feature hard-hitting feedback from the user highlighting the biggest pain point in the product. You can add more emoticons based on Plutchik's Wheel of Emotions and customize the dashboard for your needs. This way you can communicate directly with the stakeholders on how the product fares in emotions testing and facilitate better decisions.
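As a toy sketch of how such a dashboard could be assembled in code (the features, scores and thresholds below are all hypothetical):

```python
# Map per-feature emotion scores (1-5, where 5 = delighted) to emoticons
# so the dashboard can be read at a glance. Thresholds are illustrative.
def emoticon(score):
    return ":-)" if score >= 4 else (":-|" if score == 3 else ":-(")

# Scores a tester recorded while observing a user work through features.
observations = {"Sign-up": 5, "Search": 3, "Checkout": 1}
dashboard = {feature: emoticon(score) for feature, score in observations.items()}
assert dashboard == {"Sign-up": ":-)", "Search": ":-|", "Checkout": ":-("}
```

A richer version could map scores onto Plutchik's emotion categories instead of three emoticons, but the reporting idea stays the same.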

P.S.: I will be writing a series of short blog posts on Emotions Testing in the coming weeks. Stay tuned! My friend and colleague Ravisuriya challenged this post and had several questions: How do we sample emotions? Can we really sample them? How can we feel others' emotions? And so forth. I hope to address these questions in upcoming blog posts. I would also like to invite the community to share their inputs on Emotions Testing. It will be interesting to take this concept forward by ideating with many testing intellectuals.


09 March, 2014

The Curious Case of Curious Tester - Techie Tuesday Series via YourStory

YourStory is India's leading online platform for startups and entrepreneurs. Since its inception, YourStory has blossomed into a powerful ecosystem that makes it possible for The Dream Catchers to reach out to the world with their world-changing and game-changing ideas.


YourStory interviewed me for their Techie Tuesday series recently. It's my privilege and honor to be featured on this platform. It has given me an opportunity to reach out to technologists around the world and educate them about testing and the value it brings to products and services. I am excited and happy to be contributing in this small way to the often-overlooked, rarely-accepted, wonderful craft of testing.

You can read the interview HERE.

Help spread the word!


My heartfelt thanks to Alok Soni, Shradha Sharma, Varsha Adusumilli and the rest of the YourStory team, and to Steven M. Smith for this stunning pic.

16 January, 2014

A Decade in Software Testing - Summary

Hello Friends,

Some of you might have read a few blog posts from the series 'A Decade in Software Testing' that I wrote to mark my tenth anniversary in the IT industry. I didn't market this series as much as I usually do because I wrote these posts for myself, to look back and also to look ahead at where I am heading in testing. Some readers reached out to me and mentioned how the series has helped them learn. I thought I might as well collate a summary blog of all the posts so you can look them up in one place, if you find it worth your time.

Decade in Software Testing
Image Credits: The Internet

Here is a summary of all the blog posts.

In Search Of The Master

Writing, A Mental Therapy

My Tryst With Speaking Engagements

Community Gang Wars

International Society for Software Testing (ISST)

Perception About Women - Part I [Real World Problems]

Perception About Women - Part II [Gender Stereotypes, Framing and Hardwired Perceptions]

Perception About Women - Part III [Glass Ceiling, Slotting and Judgment Disposals]

I Finally Found My Master

Happy Reading!


02 January, 2014

A Decade In Software Testing – I Finally Found My Master

I took down this blog post because it was not my creative best in terms of content and writing style. I'll be updating this space shortly.

I apologize for the inconvenience and thank you for waiting, if you indeed wait!