16 December, 2012

How to Think & How to Test


Every time I picked up an Edward De Bono book, I fell asleep. My friends kept telling me how their lives had changed after reading De Bono. For me, it was a painful journey across the ocean. After failing to finish four of De Bono's books in the past, I recently picked up 'Teach Your Child How To Think' with a 'Never Say Never' attitude.

What an experience it was! I loved this book, so much that I re-read it to understand some of the thinking concepts better than I did the first time. You can't read some books until you are ready for them. 'Linchpin', for example, was a very interesting book way back in 2010. When I shared it with a couple of my senior colleagues, it successfully put them to sleep. So it is with 'Are Your Lights On?'. I thought there was nothing great in that book apart from the tunnel problem. When I re-read it a year ago, I experienced the real 'Are your lights on?' moment.

Mind map: How to Think and How to Test

James Bach and I discussed studying and books in detail during his stay in Bangalore. He said that if we replace the word ‘Think’ with ‘Test’, everything that De Bono says applies to Testing. It’s so true. 

I created a mind map (above) based on the book 'Teach Your Child How To Think' as a placeholder post for my reference. If you find it useful, you must read the book. You can even download this blog post HERE.

Regards,
Parimala Hariprasad


20 November, 2012

Test Ed - Rise of the Thinking Indian Tester

Test Ed - a Tester's Conference


Testing Education
I have interviewed a bunch of fresh grads in recent months. When I ask a few of them, "You seem to know Java; what if we need you to code in C sharp?", they effortlessly answer that it's the organization's responsibility to train them. That is how some of us think despite being in the industry for many years. If an individual's education were the employer's responsibility, we wouldn't have witnessed the rise of great men and women in any field. It's an individual's responsibility to take care of his or her own education.

No knowledge is ever wasted. Knowledge gained can be used at any time. In fact, it is the biggest investment that is least talked about in the world of money. Yet we weigh knowledge against money and let knowledge lose out on the weighing scale.



Test Ed
Test Ed is a Tester's Conference organized by testers, for testers. We need more and more testers to rise up to the challenge of creating a great testing talent pool in India. To accomplish that, Moolya has initiated this conference as a small first step.

Test Ed - Low Cost conference
We are aware that some of us can't afford to attend an 8k or 10k conference for 1-2 days. We are aware that not everyone in an organization can be sent to a conference. We are also aware that you need to see value for your money if you visit this conference. How do we solve so many problems in one shot? In Pradeep's own words, "Our goal is that a student and wanna-be tester should be able to afford this. So we came up with 500 rupees per person + 12.36% service tax". It's probably the cost of evening snacks if you walk into any mall in Bangalore. You decide whether you want to be a part of it.

Test Ed - High Value conference
An opportunity to meet James Bach, Pradeep Soundararajan and Rahul Verma
An opportunity to meet some of the brightest minds of testing industry
An opportunity to network with fellow testers
And an opportunity to educate yourself on testing - straight from some of the greatest minds of the industry

And more than all of these, you are going to meet testers who are as passionate as you are, who face some problems you may have solved, who have solutions for some of your challenges, and who can be of value to your passion called Testing.

If you want to attend, click HERE to register. For more details on the conference, visit the Test Ed website.

See you @ Test Ed.

Regards,
Parimala Hariprasad

16 October, 2012

Free E-book :: Web Accessibility Testing Heuristics

Hello Dear Readers,





My first FREE E-book on "Web Accessibility Testing Heuristics" is available for download HERE. Santhosh graciously hosted this document on his server. Other sources to download from are below:

Scribd
Slideshare

This E-book has a list of heuristics to look for while testing websites for accessibility. As the name suggests, it is a heuristic list and nowhere close to an exhaustive one. Feel free to take a look and post your thoughts!

Acknowledgement
I am thankful to Santhosh Tuppad for putting the early thoughts of Accessibility into my mind and my colleagues at Moolya who spoke about Accessibility in one form or another at different points in time. I am also thankful to Mohit Verma and Santhosh Tuppad for helping with reviews.

Happy Testing!
Regards,
Parimala Hariprasad

24 September, 2012

Interview with Santhosh Tuppad on Usability


Do you like great interiors and ambience at the hotels or restaurants you visit? One could correlate the food to functionality, and the interiors and ambience to usability, which adds value to the dining experience.

Here is an interview with Santhosh Tuppad, who emphasizes that usability is crucial for any software designed and developed for people, whether around the world or in a specific location. Usability is a deep study, and testers who say, "Oh, usability is to see if the application is user friendly" are mostly faking it. The hard truth, on the contrary, is that it is not easy and has never been easy. I got Santhosh to answer a few questions. Take a look.


Why do you call usability a crucial ingredient for software?


It is because you want to:
a. Win more customers
b. Give the best to your customers
c. Win over your competitors
d. Entice your competitors' customers to become your customers
e. Give a better user experience to your customers
f. Do more business and increase your revenues
g. Make people spread the word about your software
h. Ultimately, do GREAT BUSINESS!



My management is not serious about usability, and developers just reject the usability bugs I report. What do I do?


This is a classic problem that most testers face. Here are some tips that might make your team take usability seriously (even if you change just one person's thinking, you have made an impact):

  • Build good rapport with your manager and make him / her understand about usability and its importance.
  • Conduct a meeting with your team members and speak to them about usability and how to report them (Bug Advocacy).
  • Even if developers reject your bugs, there should be a genuine reason, not an arbitrary one. Keep reporting even if they reject. If all the testers in your team report a large number of usability bugs, I bet the developers will stop rejecting them, or higher management will look into why so many bugs are being rejected. At the very least, you will not be blamed when a customer reports the same usability bug that you wanted to report but held back from reporting, biased by the fear that the developer might reject it.


What books do you read / how do you practice usability?


  • Psychology reference book
  • http://useit.com/ and http://boxesandarrows.com/ are good ones to refer to (be careful: you need not agree with everything these people say. You need your own thinking; that is when you can make things better rather than blindly following someone else's ideas. Here, you will see how I opposed an idea of Jakob Nielsen's (http://useit.com/); however, I still refer to the site for some good study).
  • I design websites (Well, buggy ones)
  • I interact with UI / UX designers
  • I attend conferences on UI / UX
  • I keep myself updated with new technologies which could add value to software when implemented or when enhanced
  • I correlate usability with things in day-to-day life and do usability exercises with the things around me.


What are the important skills you think you possess for getting better at usability testing?


Analyzing "what could be good" for end-users. Most of the time, I have succeeded when I analyzed things over a period. However, there is a bad side to it: you can start getting biased by your own thoughts about usability rather than seeing it from the customers' point of view. I have seen testers saying, "You've got to put yourself in the customer's shoes" and then test for usability. Well, it's easy to say but very challenging to do. Your customers might range into the millions or billions of end-users with different brains, different thinking abilities and different ways of using software. Now you see: "Wow, this is really challenging". If you still do not feel it, then either you do not want to accept it or you do not understand it. Continue to do your functional testing; most organizations are happy with it.

I have cultivated a practice of arguing with my own thoughts about usability. At times my colleagues just agreed with whatever I said, but I went back and thought along those lines, and finally arrived at a better approach than what I had originally said. That is the power of thinking. Do not stop just because people stopped arguing with you. Argue with your own thoughts, or tell yourself there is something better you can do rather than sticking to your old ideas.



Do you conduct workshops or talks or would be interested in guiding aspiring testers?


I am not ready to conduct workshops on usability yet. In the future, I will be.

With respect to talks, I am interested in exploring opportunities in formats like half-day seminars or 2-hour talks on usability.

With respect to mentoring or guiding someone, I feel I am running out of bandwidth, as many people are approaching me about security testing. However, I can support people over e-mail; just do not expect quick replies. I may take up to a week, though at times you might receive a response within a few hours or minutes.



About Santhosh Tuppad

Santhosh Tuppad is the Co-founder & Senior Tester of Moolya Software Testing Private Limited (www.moolya.com). He recently won the uTest Top Tester of the Year 2010 award, apart from winning several testing competitions from uTest and Zappers. Santhosh specializes in the exploratory testing approach, and his core interests are security, usability and accessibility amidst other quality criteria. Santhosh loves writing and blogs at http://tuppad.com/blog/. He has also authored several articles and crash courses. He attends conferences and confers with the testers he meets. Santhosh is known for his testing skills, and if you are passionate about testing, feel free to contact him at: Twitter: santhoshst | Skype: santhosh.s.tuppad | Santhosh.Tuppad@gmail.com

20 September, 2012

Why isn't testing good? - Inspection vs. Prevention

Once upon a time in Testerland, there lived an enthusiastic tester called Ether (Enthusiastic Tester). Ether was a well-known tester at the time. He had great rapport with Techland, Devland, Businessland and Salesland, to name a few. His knowledge and talent were the talk of the land whenever a release ran into trouble just before D-day. His reputation was such that a new phase called 'Ether' was introduced for every release. This meant Ether had to bless every feature before his organization, Org, released it to the market.

Ether was happy with the way his life was going. He had been getting a pay hike every six months and a promotion every two years. Not to mention the heap of awards showered on him on a quarterly basis.

And then came the storm
Suddenly, Org wanted several "Ethers" in Testerland. They wanted to create more Ethers, train them, and do whatever it took to replicate many such Ethers. After all, one Ether could not succeed in blessing all releases 24/7. He was human and he needed to avoid burning out. The organization thought the same. Thankfully.

Org lined up all its concerns in testing with Test-eagle, the highest authority in Testing. Test-eagle had set up the first team in Org which had now grown to about 10,000 testers in Testerland alone. He was a key guy responsible for all the innovative stuff that had happened in Testerland so far. It was only obvious he had to get involved in this mission.

Test-eagle met up with top executives in Testerland and heard them out. He decided that a few decisions had to be made. His questions were pretty simple:
  1. Why are more testers not transforming like Ethers?
  2. What is stopping Org from building more Ethers?
He thought for a week's time and presented a report to his leadership team. His proposal was okayed in a day's time. No questions asked. After all, Test-eagle knew exactly what he was doing.

Test-eagle lined up all the managers in Testerland, Devland, Techland and others and passed a mandate on the following:
  1. A new role, 'Lead tester', was created, reporting to Quackle, who headed the QCOE - Quality Center of Excellence
  2. For every two testers, there will be a lead tester
  3. Lead tester will supervise tester's work on a day to day basis
  4. Lead tester will come up with metrics based on tester's work
  5. Lead tester will present the findings to the leadership once every Quarter
Lead testers were the new Quality Inspectors
Test-eagle was to review the results after 1 month. Interestingly, Test-eagle didn't have an inkling of what was happening with testers in Testerland although he passed the "Inspection Bill" and expected it to rock as always.

One month later...........
Test-eagle was welcomed with a pile of results and findings from lead testers who were proud of their findings. Some key observations were listed as below:
  1. Ether had left Org for greener pastures
  2. "Ethers in the making" had left Org for greener pastures
  3. Testers in close proximity to Ether in talent and knowledge also left ...... for greener pastures of course!
  4. Testers who were doing well on an average stopped doing so. Motivation had dropped
  5. Testers who were doing what they were told no longer did even that; instead, they stopped doing anything at all. Productivity dropped to nothing
  6. Testers who didn't do anything meaningful continued to do the same
  7. Customer started complaining that even basic requirements were not functioning properly
Quality lay breathless at Org's Doorstep

What happened?
  1. Lead testers brutally pinpointed how testers could be better at what they did.
  2. Lead testers wanted the testers to follow 50% scripted approach and 50% exploratory approach in addition to 20% of time/effort contribution to Automation
  3. Lead testers wanted to review every bug before it was reported in the bug tracking system
  4. Lead testers questioned the severity of bugs
  5. Lead testers rejected several valid bugs because they didn't think that those were bugs
  6. Lead testers questioned testers if they missed any bugs
  7. Lead testers questioned testers if they thought testers tested very little on a particular day
Lead testers forced the testers to TRY HARDER and DO BETTER, which the testers were already doing. Something more had to be done. No one knew what that "something more" was.

What went wrong?
Test-eagle ignored the following, which was a subset of the problems that existed:
  1. Test environments were never stable. The release management team hardly took onus
  2. Test environments, when available, needed a lot of troubleshooting from testers before they worked
  3. Checking code changes into environments took days at a stretch
  4. Code drops to testing teams never happened on time. There was always a 2-week delay
  5. If code drops happened on time, only a very small code mass came for testing. For example, if there were, say, 10 features, only 1 feature would be available for testing
  6. Code drops towards release deadlines contained a massive number of features
  7. Availability of test data was always a problem. The data team always forced testers to compromise testing with workarounds, and at times provided invalid or nonsensical data. A laptop, for example, would cost only 2 Baht
  8. Infrastructure in testing environments was never provided on time. If the production environment needed 3 load balancers, testers would be advised to test without the load balancers and certify that the code worked fine and didn't break anything new. Whether the code worked with 3 load balancers in production was a question not to be asked
  9. And a lot more..........

What's wrong with the inspection approach? 
Inspection chokes! How would you feel if your mom or dad just kept staring at you without saying a thing while you did your household chores? The same thing happened to the Testerland testers.


There was a need for a CHANGE IN THE SYSTEM in totality. What Test-eagle did was transform just the "Testing System", converting people in other systems into supervisors who oversaw testing work and mocked it. Test-eagle put the complete responsibility for quality on the testing team and let everyone else go scot-free.
Test-eagle failed to understand that Quality is everyone's job, but management's responsibility. 
Test-eagle was highly demotivated. What on earth had failed the plan into which he had poured his blood and sweat? He had to calm down. He had to come down to the testers' level. He had to prevent some or all of the testers' problems listed above to be able to improve the quality of testers. He had to fix the problems that were built into the system before blaming the 'Testing system'. And he did. As a first step, he figured he didn't need Lead testers. Not that he fired them; he kept them, but stripped them of their Inspector roles. He made them part of the system.

Inspection Vs. Prevention mind-set
Test-eagle moved away from an Inspection mind-set to a Prevention mind-set. He thought that if he fixed a lot more problems in the non-testing teams, the testing teams could do better. Testers could do better. Ethers could come back to Org without a second thought. He looked at some of the key problems that Org had been facing for a long time and identified the top five to begin with. He summoned the respective stream leaders and discussed the problems in detail. A brainstorming session followed. A few action items were noted. And the group was off to execute.

Inspection vs. Prevention
Courtesy: www.docstoc.com
The first improvement was on the test environment challenge. The release manager was made answerable for every minute of test environment downtime. He took on the onus of fixing the problems right away, of course after being reprimanded by Test-eagle (sometimes, position power is the only power that works). After a while, testers were happy. They no longer suffered long hours of environment downtime. They could test a lot more in the time that was scheduled for testing. They didn't have to waste time on mindless things.

Test-eagle took each stream to task and fixed the problems one after the other. Slowly, the testing team was empowered. Empowered to TEST BETTER!

*This post is inspired by John Guaspari's book 'I Know It When I See It'.

Regards,
Parimala Hariprasad



22 August, 2012

Guesstimate - An estimation challenge




Suppose I ask you to test category 'Books' on www.flipkart.com.

Let's take the task of testing the 'Send email' feature. As a project lead, I might say it takes 5 man-days to test this feature, and a tester needs to pick up this task and complete it in 5 days' time.

In the above scenario, 5 days is a rough estimate. An estimate is a guideline, not a deadline. The project lead would have arrived at the number 5 for one of the two reasons below:
1. He just picked a random number out of the time available for completing testing
2. He knew the product in and out and made this guess

What factors might the project lead have overlooked about the feature and the tester while estimating?
a. Time required for the tester to explore and learn the feature
b. Time required for ideation and test design for that feature
c. Test data availability for sending emails. For example, in anti-virus testing, procuring Eicar files as attachments while sending emails
d.




18 July, 2012

First Year AWESOMENESS @ Moolya

It's been a year already since I joined a sexy startup called "Moolya". It was time for an interview as part of the company tradition. This interview is very special to me as it comes from people whom I not just work with, but also have profound respect for. My interview is now posted on the Moolya blog HERE


Below is an appreciation of my work from Pradeep who often calls me a "Damager" :-). I am humbled to the core!


************************************************


Padmasree Warrior, CTO of Cisco, is one among the many who made it big and is noticeable. So is Kiran Mazumdar Shaw of Biocon. If you ask me who it would be in testing, I would bet it is Parimala from India. Talking is easy; walking the talk is difficult. Parimala makes walking the talk look easy. She has a Parimala way of working, and I believe she has a large influence on the people working with her. Not all people who are touched by her blog about it, so some of what Parimala does remains hidden and secret. Through this interview with her, I am trying to see if I can bring to the world what she does and how she does it. What has Moolya been able to offer this great woman, who doesn't actually need Moolya to do great testing, though Moolya definitely needs her to keep delivering moolya?
When Mohan Panguluri joined us as the CEO, after looking at a stream of appreciation emails from our customers and the teams she works with, he wondered just how many appreciation emails she gets.


Look at some of the amazing things she said in this interview. The one about empathizing with people was a gem of a thought and writing. To work with Parimala, I have to raise my standards; otherwise I start to look bad to myself. Even if I do not raise my standards, Parimala would respect me for who I am. Not because I am the so-called Pradeep Soundararajan, but because she treats everybody the same and respects people. She teaches us humility.


Salute!
************************************************

 Special Thanks to Moolya and Moolyavans for making my stay here so Awesome. Looking forward to many more years ahead.


Purely a marketing blog post. Jokes apart, this means a lot to me!
Regards,
Damager Pari

30 June, 2012

CAPTCHA and Customer Context

In my previous blog post, my friend and colleague Santhosh Tuppad asks, “What are the different contexts in which customers might want to use CAPTCHA?”

Captchas and customers always make for a difficult discussion. As a customer myself, I am annoyed by captchas that are hard to read. If I had the option, I would obviously opt out, as it removes one step :-). However, as a tester, I would be concerned about the implications of not opting for a captcha. Here are a few contexts in which customers might want to use a captcha. Some of these may repeat points from my previous blog post; do bear with me.


Image credits : www.onlineaspect.com

Where would a customer prefer captchas?
Sign up
Suppose I run a testing forum on my website that requires users to sign up to be part of the forums. What if a script is written to register endlessly? I would like a captcha on the registration form to isolate fake accounts from genuine users. Unlimited sign-ups without appropriate verification can become a daunting challenge for website owners.

Login page
If a user has keyed in the wrong password 3 times consecutively, present a captcha to make the user prove he is a human :-). I have assumed that the user's account is locked out after 5 unsuccessful attempts; some applications limit it to 3. A captcha, along with an account lockout policy, helps curb brute force attacks early on.
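The policy above can be sketched as a small server-side counter. This is a minimal illustration with assumed thresholds (captcha from the 3rd consecutive failure, lockout at the 5th) and hypothetical names, not a prescription:

```python
class LoginThrottle:
    """Track consecutive login failures per user (illustrative sketch)."""

    CAPTCHA_AFTER = 3   # show a captcha from the 3rd consecutive failure
    LOCKOUT_AFTER = 5   # lock the account on the 5th

    def __init__(self):
        self.failures = {}  # username -> consecutive failure count

    def record_failure(self, user):
        self.failures[user] = self.failures.get(user, 0) + 1

    def record_success(self, user):
        # a successful login resets the counter
        self.failures.pop(user, None)

    def needs_captcha(self, user):
        return self.failures.get(user, 0) >= self.CAPTCHA_AFTER

    def is_locked(self, user):
        return self.failures.get(user, 0) >= self.LOCKOUT_AFTER
```

A real implementation would persist the counters and also throttle by IP address, but the shape of the check stays the same.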

Forgot password page
If an attacker tries to break into another user's account using the Forgot Password feature, he could write a script for it. I would prefer a captcha on the Forgot Password page as a second-level check so that scripts cannot bypass it (the first-level check is asking the user for a username or email address). Also note that the captcha database must be large enough that attackers cannot exhaust it quickly. A captcha on this form prevents email spamming, whether one email or mass emails at a time.

Forms (Comments, Feedback, Suggestions, Sales queries etc)
Comments

As a blog owner, I get bogged down with all kinds of spam comments. I prefer a captcha to be presented to users who want to add genuine comments and keep spammers away.

Sales queries
Many startups can't afford a 24/7 toll-free line to answer customer queries. They place a nice little page in the Contact Us section of their websites to invite sales queries. If some thirsty spam bots find such forms, it's a crocodile festival ;)

Feedback/Suggestions
I prefer a captcha on feedback or suggestion forms to prevent random feedback from spammers or spam bots.

Discussion forums

Forums that let users post questions without registering are the easiest targets for spamming. Presenting a captcha when a user posts a question is good practice.

Facebooking
You post an update or upload a photo and you are presented with a captcha. There are two facets to captcha usage here.
1. As a user, if I am presented with a captcha while updating my status or uploading a photo, which is a one-time activity, I would complete it no matter how annoyed I am by the prompt. If I think of my account in an attacker's hands, I would be worried about spam bots. Though I acknowledge that spamming status updates can be done manually, it cannot be done at the speed a script would accomplish.
2. As a businesswoman selling something on Facebook, I would hate a captcha. Suppose I have a Facebook account for my retail business where I need to update my status or post pictures quite often. Providing captcha details could be very annoying, yet unavoidable in this context. This could affect the user experience.
Note: In either of the cases presented above, it’s better to have captchas.

Customer preference of a Captcha
Some customers who prefer captchas have security in mind, while many others think captchas spoil the user experience. The contexts listed above are a few in which I would prefer captchas. If you have a different view, you can always spam me in the comments section, as I don't have a captcha ;)

Take it with a pinch of salt
As users, we think that if captchas are in place, spam is blocked. But captchas are not 100% spam-proof, nor are they 100% secure. In today's world, where most captcha implementations are easily broken, it's hard to trust captchas completely. Some friends suggest that reCAPTCHA is not broken yet. Well, it could just be a matter of time ;)

Addendum
Santhosh Tuppad has written a cool blog on Testing Captchas. Have a feast.

Regards,
Parimala Hariprasad

15 June, 2012

Gotcha CAPTCHA?

A CAPTCHA is an anti-spam program that generates tests that humans can pass but computers can't. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, originally coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas Hopper and John Langford of Carnegie Mellon University. A captcha is typically used where an application needs to know that a human is on the other side, not an automated script or another computer (which can't think for itself).

Imagine going to the public library and picking up a century-old book. If this book has to be converted to an e-book, a computer must be able to read it. A machine, unlike a human, cannot make sense of a real bug squashed on a page, dirt collected over the years, missing letters, or wrongly spelled words; i.e., computers cannot read distorted text. This is the funda behind captchas.

Types of CAPTCHA

Text

The human is challenged with a text captcha that has some amount of background noise.


Image

An image, with or without text, is presented to the user and a question is asked based on it, e.g., "Which bird is shown in the picture below?". This captcha could even take the form of a graphical puzzle that the user must solve.


Q and A

The user is presented with a question like "How many hands does a crow have?". If this type of captcha has a limited set of questions and answers, it can be broken easily.

Math puzzle

A mathematical puzzle such as 2+3=? is presented to the user, who is expected to solve it.
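Generating such a challenge is trivial; here is a minimal sketch (the function name and number ranges are my own, for illustration):

```python
import random

def make_math_captcha(rng=None):
    """Return a simple arithmetic challenge and its expected answer."""
    rng = rng or random.Random()
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"{a} + {b} = ?", a + b
```

The flip side, as noted for Q and A captchas above, is that such challenges are just as trivial for a script to parse and solve, so they stop only the laziest bots.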

 

Game captcha


 

Why do we need CAPTCHA?

  • To prevent spammers from creating fake accounts on websites [Registration page]
  • To prevent unauthorized users from accessing features that help hack email accounts and/or spam user’s inbox [Forgot Password’]
  • To prevent automated software from participating in online activities by impersonating a human [Online polls/surveys]
  • To prevent spammers from commenting on websites [Comment forms] 

 

Bypassing CAPTCHAS

Bypassing Humans

Spammers hire humans to enter values into captcha fields whenever needed. This means a script can still be written to register or spam on the web, with humans solving only the captcha step.

Automated scripts

A spammer patiently downloads all the captchas on a website over a period of time and builds a database. He can then write a script that compares captcha images and keys appropriate values into the captcha fields based on the comparison. This is where Q and A and puzzle-based captchas come in handy to a considerable extent; however, it's important to note that these captchas can be programmed against as well.
Using free OCR tools, spammers can decode text-based captchas. Spammers can use such a tool to build a robust captcha database and use it to bypass captchas.

Using session ids

Many captcha implementations do not destroy the session id once the correct captcha has been keyed in. Such session ids can be exploited with the help of a few lines of code, and the captcha bypassed easily.
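The defence is to make each challenge single-use on the server: once a captcha is answered, rightly or wrongly, its token is destroyed. A minimal sketch, with class and method names of my own invention:

```python
import secrets

class CaptchaStore:
    """Server-side captcha store where every challenge is single-use."""

    def __init__(self):
        self._pending = {}  # token -> expected answer

    def issue(self, answer):
        """Register a new challenge and return its opaque token."""
        token = secrets.token_hex(16)
        self._pending[token] = answer
        return token

    def verify(self, token, answer):
        # pop() destroys the challenge whether or not the answer matched,
        # so a once-validated token cannot be replayed by a script
        expected = self._pending.pop(token, None)
        return expected is not None and expected == answer
```

Because `verify` removes the token unconditionally, a script that captured a "known good" session cannot submit the same token twice.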

Eliminating the CAPTCHA element

If captchas are validated on the client side and not on the server, users can remove the captcha element using an add-on like Firebug, or knock the captcha code out of the HTML source. Easy bait, isn't it?
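The fix is to treat client-side checks as cosmetic and validate the captcha on the server for every submission. A minimal sketch of such a handler (the function and field names are illustrative, not any framework's API):

```python
def handle_form(post_data, session):
    """Accept a form only if the captcha answer stored server-side matches."""
    expected = session.get("captcha_answer")
    supplied = post_data.get("captcha")
    # a request with the captcha field deleted (e.g. via Firebug) simply
    # arrives with supplied == None and is rejected like any wrong answer
    if expected is None or supplied != expected:
        return "rejected"
    session.pop("captcha_answer", None)  # single use
    return "accepted"
```

Whatever the browser did to the page, the server compares against its own stored answer, so stripping the element out of the HTML gains the attacker nothing.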

CAPTCHAs and Usability

* These days many websites present captchas that are difficult even for humans to decode, forget about machines :-). Background noise in a captcha (anything other than the text that makes the text difficult to read) must be kept at a level that is easy for humans yet difficult for computers.
* If users are unable to recognize a particular captcha, there needs to be a provision to generate a new one. In general, every page refresh displays a new captcha to the user.
* Providing an audio captcha supplements the captcha functionality by allowing the user to listen to the audio if he cannot decipher the image.
* Suppose a user is on a registration page and has already entered the captcha information, but the form submission fails for some reason. The user is then presented with a new captcha a second time. If the user has already proved that he is a human by keying in the captcha data, why present it repeatedly?
* The look and feel of captcha elements with respect to the web page's background colors and images is important. Keeping captchas at the end of the page, where they are hardly noticeable, is a problem for users who realise there is a captcha only after page validation fails.


There are some applications that present the same captcha even if the user enters wrong data, instead of presenting a new captcha after the page refreshes (LOL).


 CAPTCHA and Accessibility

* Displaying captcha with a lot of background noise becomes a problem for differently-abled users. For e.g. a visually impaired user cannot see what’s on the screen. It needs to be read out loud. This requires audio support and hence the need for an audio captcha.
* Accessibility functions need to be built into the web application for screen reader tools to read captcha elements and invoke an audio captcha.
* A few applications display captchas that are hard for humans themselves to decode. This poses an accessibility problem for all groups of users. 
* Dyslexic people could have problems with captchas too [Just wondering]

CAPTCHA and Security

A small database of captchas is easy to collect and crack. If a website rotates through about 20 captchas at random, a regular user of that website can figure out all of them, write a little script and crack them. Below is a snippet from one of my blog posts on how the absence of captchas can impact security.
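A hedged sketch of how small such a "crack" can be, assuming a hypothetical site that rotates through a fixed pool of 20 captchas: solve each one once, key the answers by a fingerprint of the image, and every future captcha is answered instantly, no OCR needed.

```python
import hashlib
from typing import Optional

# Hypothetical fixed pool of 20 captcha images (stand-in bytes) and their text.
pool = {f"image-{i}".encode(): f"text{i}" for i in range(20)}

# One-time human effort: build a lookup table keyed by the image fingerprint.
lookup = {hashlib.sha256(img).hexdigest(): text for img, text in pool.items()}

def crack(image_bytes: bytes) -> Optional[str]:
    """Solve any captcha from the known pool instantly."""
    return lookup.get(hashlib.sha256(image_bytes).hexdigest())

# Every captcha the site serves is now solved automatically.
solved = sum(crack(img) == text for img, text in pool.items())
print(solved)  # 20
```

A pool of thousands of captchas, regenerated server-side, makes this table-building approach impractical.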

* No Captcha on Registration forms
Skipping the captcha here is an open invitation. I recently needed 50 valid email accounts for testing a website. All I did was write a simple automated script using iMacros (a FREE add-on) for account registration and creation. All I had to do was activate these email accounts manually (note that this step could have been automated too). At the end of the testing effort, these accounts were discarded. Now, if you are a company that allowed 50 email accounts for a single impostor, you lose an awful lot of revenue. Is this what you want? If you had a captcha in place, my script would have failed, as a captcha expects different data each time, which needs human intervention. Building a captcha into registration forms is a good design idea to keep away not-so-serious users and spammers.

* No Captcha on Forgot Password forms
If there is no captcha on the Forgot Password form, I could write a script to feed umpteen valid email addresses to the Forgot Password page. Why would any user do that? He could be a cranky user. He might take pleasure in irritating fellow users. He might be an unethical hacker. He might not know what to do with his life!

* No Captcha on Comments forms on websites and blogs
As a blogger, I get a lot of spam comments for products that I don’t need. I wish spammers took segmentation and targeting seriously and routed their ads to the appropriate audience :-). Without a captcha in place, spammers can write easy little scripts to post these *free ads* in the comments section of any website. Having a captcha requires human intervention, which in turn might block spam to a reasonable extent. 


Addendum on 21st June 2012
Added Game captcha. 


Regards,
Parimala Hariprasad (yeah, changed my surname. Do remember ;))



16 April, 2012

18 Testing Challenges from Santhosh Tuppad - Part II

This is Part II of my response to Santhosh Challenge. Part I is HERE
 

8. If you are the solution architect for a retail website which has to be developed; what kind of questions would you ask with respect to “Scalability” purpose with respect to “Technology” being used for the website?

Here are a few questions on Scalability w.r.t Technology:
1. Can Technology take the current load of customers visiting the website?
2. Can technology handle an increasing load of customers per unit of time?
3. What is the maximum threshold load the technology can handle?
4. What is the maximum load under which the technology performs optimally, without any degradation in website usage?
5. Is the technology easily customizable with additional infrastructure?
6. Does technology have any scalability limitations per se?
7. Does technology blend itself with the programming languages used for coding the website?
8. Does technology blend well with the hardware, software and middleware used?
9. Is technology portable if there is a need to build the website on multiple platforms over a period of time?

A system is considered scalable if it continues to accept a growing load and operate normally without additional configuration costs.

Let me assume I work for one of the biggest retail giants in the world, with retail business spread across multiple continents. Let’s understand what they need as part of basic infrastructure:

Hardware
Data Warehouses
Content Management Systems
Mainframe and Unix Servers for running batch jobs at regular intervals
Middleware for data transfer between multiple components
Marketing management tools
Email management tools
Data storage devices
Multi-processor distributed systems

Software

Server operating systems
Databases
Customer facing applications
Marketing applications
Marketing management tools
Email management tools

In a typical scenario, if a performance problem surfaces, what does the development team do? They add infrastructure to mitigate the problem. This is an easy way to temporarily shoo away the performance problem. Over a period of time, if the management decides to keep the contribution margin of the product intact, adding infrastructure becomes a problem.

What has scalability to do with Infrastructure when the question explicitly asks about technology scalability?
I strongly believe that technology and infrastructure must go hand in hand to make any software solution scalable. If the technology is scalable but the infrastructure setup is poor, there is a problem. If the infrastructure is on par with expectations but the technology isn’t scalable, that is a problem too.

Let’s consider scalability in a web service scenario. If we had to scale the web service to a large set of customers over a period of time, the technology must suitably allow it. If I wrote a simple batch script for resource allocation in the above scenario, it may or may not scale. If the same solution were written using a framework from a solid programming language, there is more hope.

In short, scalability is good only when there is a right mix of technology and infrastructure. Both cannot be mutually exclusive to make any system scalable.

NOTE: I am not happy with my answer on this one. I liked the one written by Markus Gartner.


9. How do you think “Deactivate Account” should work functionally keeping in mind about “Usability” & “Security” quality criteria?

I have been unable to close one of my bank accounts as they have a lengthy de-activation process - an application form, returning the security device and any remaining cheque leaves. Initially, I was annoyed, as I didn’t want to spend time going to the bank and they were not taking my verbal confirmation over the phone seriously. After a while, I realised that if de-activation were that simple, I could de-activate anyone’s account if I knew their customer number. It’s important to keep the de-activation process as secure and fool-proof as possible.

If a user decides to de-activate his account on a website, it can be done using the following steps:

Step 1: Identify the user
Identify whether the user has a valid account by asking for a username/email address and validating it accordingly

Step 2: Authenticate the user
Authenticate the user by asking him to reveal some information that is unique to him. This way, one can be doubly sure that the right account is de-activated. Having a captcha in this step prevents bots from misusing the feature.

Step 3: De-activation process
Initiate de-activation, if the user provides valid information, by sending an email with a de-activation link. Note that this hyperlink must be limited to one-time use.

Step 4: Confirmation of de-activation process
The user needs to click on the de-activation link in the email to de-activate the account

The above steps are reasonably secure from a security point of view and, being simple and easy to follow, reasonable from a usability point of view.
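The one-time link in Steps 3 and 4 can be sketched in a few lines of Python. This is an illustration with an in-memory store and made-up function names, not a production design:

```python
import secrets
import time

# In-memory token store: token -> (user, expiry). A real system would persist this.
tokens = {}

def issue_deactivation_token(user: str, ttl_seconds: int = 3600) -> str:
    """Step 3: email the user a single-use, expiring de-activation link."""
    token = secrets.token_urlsafe(32)
    tokens[token] = (user, time.time() + ttl_seconds)
    return token

def confirm_deactivation(token: str):
    """Step 4: the link works exactly once, then is discarded."""
    entry = tokens.pop(token, None)  # pop => single use
    if entry is None:
        return None
    user, expiry = entry
    if time.time() > expiry:
        return None
    return user

t = issue_deactivation_token("alice@example.com")
print(confirm_deactivation(t))  # 'alice@example.com'
print(confirm_deactivation(t))  # None - the link cannot be replayed
```

Popping the token on first use is what makes a forwarded or intercepted link worthless afterwards.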






10. For every registration, there is an e-mail sent with activation link. Once this activation link is used account is activated and a “Welcome E-mail” is sent to the end-users e-mail inbox. Now, list down the test ideas which could result in spamming if specific tests are not done.

1. Click on the activation hyperlink received in the email inbox multiple times. If this action is tied to another action that sends a welcome email, then each time the user clicks the hyperlink, another email is sent

2. Once the activation hyperlink opens and confirms that activation has succeeded, refresh the page using the “Refresh/Reload” option. If refreshing this page is tied to an action that sends a welcome email, the user gets spammed

3. Once the activation hyperlink opens and confirms that activation has succeeded, keep refreshing the page using the ‘Reload Every’ add-on to spam the user. This is a variation of idea 2 above.

4. If the registration page does not check for already-registered email addresses, registration can be done multiple times using the same email address, spamming the user. Combined with ideas 1, 2 and 3 above, this can be used to spam the user to a large extent

5. On the registration page, enter the same email address multiple times, separated by commas. Any loophole in the application could accept these addresses and send the same activation hyperlink to the same email address multiple times. Note that this scenario is not directly linked to the question above.
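Idea 4 above is easy to demonstrate with a toy model. This Python sketch (all names hypothetical) contrasts a registration service that rejects duplicate email addresses with one that happily re-sends the activation email 50 times:

```python
class RegistrationService:
    """Toy registration flow; counts activation emails sent per address."""

    def __init__(self, reject_duplicates: bool):
        self.reject_duplicates = reject_duplicates
        self.registered = set()
        self.emails_sent = {}

    def register(self, email: str) -> bool:
        # Test idea 4: duplicates must be rejected before any email goes out.
        if self.reject_duplicates and email in self.registered:
            return False
        self.registered.add(email)
        self.emails_sent[email] = self.emails_sent.get(email, 0) + 1
        return True

vulnerable = RegistrationService(reject_duplicates=False)
for _ in range(50):
    vulnerable.register("victim@example.com")
print(vulnerable.emails_sent["victim@example.com"])  # 50 - the user is spammed

fixed = RegistrationService(reject_duplicates=True)
for _ in range(50):
    fixed.register("victim@example.com")
print(fixed.emails_sent["victim@example.com"])  # 1
```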


11. In what different ways can you use “Tamper Data” add-on from “Mozilla Firefox” web browser? If you have not used it till date then how about exploring it and using it; then you can share your experience here.

1. Viewing client requests - Tamper Data can be used to view all requests sent from the client to the server
2. Parameter tampering - Can be used to tamper with input parameters before they are submitted to the server
3. Security Testing - Can be used to tamper with HTTP requests (headers and parameters) to security-test client requests to servers
4. Cookies/Session IDs - Can be used to view and tamper with cookies and session IDs and hijack other users’ sessions
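To see why parameter tampering matters, here is a small self-contained Python sketch; the checkout form and price field are made up, and no real HTTP traffic is involved. A server that trusts a client-supplied price is beaten by exactly the kind of in-flight edit Tamper Data allows, while a server that recomputes from its own catalogue is not.

```python
CATALOGUE = {"book": 500}  # server-side source of truth (hypothetical prices)

def naive_checkout(params: dict) -> int:
    # Trusts whatever price the client sent - tamperable.
    return params["price"] * params["qty"]

def safe_checkout(params: dict) -> int:
    # Recomputes the price server-side; the tampered field is ignored.
    return CATALOGUE[params["item"]] * params["qty"]

request = {"item": "book", "price": 500, "qty": 2}
request["price"] = 1            # what a tester does with Tamper Data
print(naive_checkout(request))  # 2 - the tampered price was honoured
print(safe_checkout(request))   # 1000 - the server's own price wins
```

As a tester, any field the server echoes back without recomputation is worth a tampering test.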

Pros
1. Tamper Data is an add-on that is available wherever the Firefox browser is installed
2. It’s easy to set up and use

Cons
1. Tamper Data is not as powerful as Burp Suite :-)


12. Application is being launched in a month from now and management has decided not to test for “Usability” or there are no testers in the team who can perform it and it is a web application. What is your take on this?

Usability testing is treated like a stepchild in most organizations. It often becomes a last-minute task, taken up if and only if so-called "functional testing" is complete. We all know that exhaustive testing is impossible. As a result, there may be little or no time left for usability testing.

If I am part of this project, I would identify a tester or a group of testers who I think have decent knowledge of usability. I would help them learn usability heuristics and become well-versed in those areas. Another good learning method is to provide sample websites to test for usability, evaluate their reports and give feedback. Based on this, I would hope they'd do a good job testing the web application above.

As I write this, I am aware that some people reading this will think, "Where is the time to do all of the above when there is hardly any time left to test for usability?" I completely empathize with such people. I have been there and done that. In such projects, it makes sense to provide "on the job" training. It's important to identify at least one person who has a good understanding of usability in general and usability testing in particular. **This person must set up a usability testing team and do one or more of the following:

Hallway Testing
Ask a few people walking down the hallway to test the websites. Hand-pick users from different walks of life and find out what irritates them as they use the sites.

Recorded Surveys
Record the proceedings as the user uses the website and talks about pain points. Show the findings to the web designers and work out how the website could be designed differently to ease those pain points.

Emphasis on Feelings
Magnifying user’s feelings (good and bad) as they use the websites helps gauge what makes good websites good and bad websites bad. Users’ feelings are fragile and it’s important for websites to handle these feelings with care.

**Adapted from my article "My Story of Website Usability" published in January 2012 edition of Testing Circus magazine.



13. Share your experience wherein; the developer did not accept security vulnerability and you did great bug advocacy to prove that it is a bug and finally it was fixed. Even if it was not fixed then please let me know about what was the bug and how did you do bug advocacy without revealing the application / company details.

Security Vulnerability
I worked on a multi-component project where each component was owned by a different team. Our team owned component A and another team owned component B. Component A was responsible for storing confidential information in a single location. This data would be requested by multiple consumers and processed accordingly. Since this was a closely watched system with restricted user privileges, no security measures were taken on this component. The problem arose when other components had to request data from component A. Component B, first in line, requested data, and component A gave it away because the requestor on component B was trusted. However, the request and response were not secured: component B ended up with a local copy of the confidential data even though it was not supposed to store any of it.

Team Ownership
The security vulnerability above was an even bigger challenge given that multiple teams were involved, with tons of ego floating around, from “This is not my component’s problem, it’s yours” to “Your implementation sucks”. The people involved hardly got into the details of the problem, or the impact it could have if these components shipped as-is.

Bug Advocacy
As a tester, it was important for me to understand the impact of the above problem before I could advocate fixing it. I got in touch with a couple of subject matter experts who had worked on similar projects and asked for inputs. I initiated a dialogue with a couple of security architecture teams to understand the implications. Around the same time, I gathered feedback on why Team A and Team B had to work together to fix this problem instead of working in silos. Based on all the information I had, I convened a meeting and discussed these problems with all the component owners. Eventually, the bug was accepted as a problem and fixed accordingly.


14. What do you have in your tester’s toolkit? Name at least 10 such tools or utilities. Please do not list like QTP, Load Runner, Silk Test and such things. Something which you have discovered (Example: Process Explorer from SysInternals) on your own or from your colleague. If you can also share how you use it then it would be fantastic.

1. Notepad ++ - for taking notes
2. Burp Suite - for tracking HTTP requests
3. Beyond Compare - comparing files/folders for Build Verification testing
4. XMind - mind mapping tool for project planning, infrastructure planning, test planning, test status reporting and test release documentation
5. Process Explorer - for tracking processes
6. Task Manager - for tracking processes and tasks
7. Batch scripts to execute mundane testing tasks
8. Windows Scheduled tasks to automate windows based tasks. Eg. running a server installation batch script daily at a specified time :-)
9. Microsoft Excel - reporting
10. Browser Add-ons

a. Firebug
b. Web Developer
c. Tamper Data
d. iMacros
e. Resolution Test
f. And others at http://moolya.com/blog/2011/03/04/addon-mindmap-for-testers-from-moolya/


15. Let us say there is a commenting feature for the blog post; there are 100 comments currently. How would you load / render every comment. Is it one by one or all 100 at once? Justify.

Option 1 - Loading comments one after another
Given the attention span of a human being and his thirst for the very next comment, loading one comment at a time is a bad idea. It makes for poor user experience, because the application has to assume a set number of seconds for reading one comment before it loads the next. That delay might be acceptable to a few users and unacceptable to others. Many of us are curious enough to start the next comment even before finishing the previous one. Given this user behaviour, loading one comment at a time is a bad idea.

Option 2 - Loading all 100 comments at one shot
Loading all 100 comments in one shot means a performance overhead. Assume that each comment contains close to 30 words, and suppose each takes about 6 seconds to load on a machine with 512 MB RAM (well, I have one at home ;-)). Loading 100 comments sequentially means 100 × 6 = 600 seconds, which is 10 minutes. 10 minutes is a HUGE time for a comments page to load; users would run away from the page. Loading all comments simultaneously is a poor idea. Moreover, this solution does not scale as the number of comments increases over time.

Option 3 - Gradual loading of a designated number of comments
Loading a few more comments while the user reads the previous ones is good design. Suppose 10 comments load at a time. By the time the user scrolls down to read the 8th comment, the next set of 10 comments gets loaded in the background. This way, loading is phased out and the user’s attention is not lost either. Performance overhead is minimized, as there is no stress on the system to load all comments in one shot. This is a scalable solution. E.g. the “More” option on the Twitter web interface.

I would go for Option 3. Again, this suits me as a user. If the context demands that all comments load at once to satisfy specific user requirements, designers could still go for Option 1 or 2 above. Context rules!
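Option 3 can be sketched as a simple pager. This is an illustration in Python (the page size and comment data are made up), standing in for whatever AJAX endpoint a real site would call:

```python
def load_comments(comments, page_size=10):
    """Yield comments one page at a time (Option 3: gradual loading)."""
    for start in range(0, len(comments), page_size):
        yield comments[start:start + page_size]

comments = [f"comment {i}" for i in range(100)]
pager = load_comments(comments)

first_page = next(pager)   # rendered immediately on page load
print(len(first_page))     # 10

# As the user nears the bottom (say, comment 8 of 10), fetch the next page.
second_page = next(pager)
print(second_page[0])      # 'comment 10'
```

Because the generator is lazy, nothing beyond the requested page is ever fetched, which is what keeps the approach scalable as comment counts grow.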


16. Have you ever done check automation using open-source tools? How did you identify the checks and what value did you add by automating them? Explain.

I have used Microsoft ACT, Autoit and iMacros add-on for check automation.

How did you identify the checks?
Checks are tests that don’t need human thinking to decide the next course of action. A few lines of code can accomplish the same if coded well and remove human intervention.

To quote Ben Simo, “Automated checking can only process whatever decision rules someone thought to program when the checks were created. … Rather than look at testing as something to be either manual or automated, I encourage people to look at individual tasks that are part of testing and try to identify ways that automation can help testers evaluate software.”

What value was added?
Suppose you need to create about 30 test email accounts. You could use the iMacros tool and record the registration process. If a captcha is present, the script can run until the captcha is encountered, then continue after a human enters the captcha value.
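That "automate up to the captcha, then hand over to a human" pattern can be sketched as follows. The form fields and callback are hypothetical; in real use the callback would prompt a person (e.g. via `input()`) instead of returning a stubbed value.

```python
def create_account(email: str, solve_captcha) -> dict:
    """Automate the form fill, but delegate the captcha to a human callback."""
    form = {"email": email, "password": "s3cret!"}  # automated steps
    form["captcha"] = solve_captcha()               # the one human step
    return form

# Here we stub the human with a lambda so the sketch runs unattended.
accounts = [create_account(f"user{i}@example.com", solve_captcha=lambda: "XK4P9")
            for i in range(30)]
print(len(accounts))           # 30
print(accounts[0]["captcha"])  # 'XK4P9'
```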

I have been part of a team that used Microsoft ACT to write basic performance scripts to test performance tuning products. I wasn’t involved directly with coding, but using these scripts helped identify performance problems in the product we were testing. And of course, it saved a lot of time doing the same set of tests manually.

I have used AutoIt to execute server installation and configuration for one of my projects. An AutoIt script, along with batch scripting, was used to automate a server installation process that manually took one full day. With the script, the installation ran overnight and the server was ready the next morning.

In general, check automation helps automate mundane, routine tasks so that the saved time can be spent testing features that need humans to think and decide the next course of action.




17. What kind of information do you gather before starting to test software? (Example: Purpose of this application)

Software information
1. What is the problem that this application is expected to solve, i.e., the purpose of the software? :-)
2. What is the history of this application?
3. Is there an existing application that was built for the same purpose but failed to solve the problem? If yes, what were its limitations?
4. What technology is this application built with?
5. Is there a competitor application already? If yes, what is it good at and what is it bad at?
6. What are the business objectives set for this application?
7. What are the constraints in building this application?
8. Which features are agreed upon to be built?
9. Which features are prioritized over others?


User information
1. Who are the users of this application?
2. Are the developers (testers, programmers and concerned support teams) of the application aware of how this application will be used?
3. What is the single most burning problem they face - the one this application is built to solve?
4. What are the constraints (permissions and privileges) under which users have to use the application?

Documentation
1. Existing documents about the application
2. New requirements documents
3. Online help (if any)
4. Documents relating competitor products
5. Researching on similar applications

People Knowledge
Talking to the following folks can fetch more information about the application:
1. Actual stakeholders of the product (business teams)
2. Sales professionals
3. Marketing professionals
4. Business/Functional analysts
5. Solution architects
6. Programmers (give them a warm hug every day, LOL!)
7. Experts on the respective technologies and applications
8. Domain experts
9. Support teams
10. Infrastructure teams
11. Senior management
12. Fellow testers

And of course, tons of meetings :-).


18. How do you achieve data coverage (Inputs coverage) for a specific form with text fields like mobile number, date of birth etc? There are so many character sets and how do you achieve the coverage? You could share your past experience. If not any then you can talk about how it could be done.

Exhaustive data coverage is impossible, as the number of inputs explodes as variables are added to the system. For example, if we had to test with every format of mobile number used across the world, imagine the number of tests that would need to be executed.

What we need to do in such situations is identify a sample data set to use as test data. This sample must be an optimal subset covering the most heavily used test data formats. There are several tools that can generate test data; they not only generate a decent sample, but also support multiple character sets, languages, special characters and many other features. Following is a summary of a few of them.
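As a rough illustration of what such generators produce, here is a tiny hand-rolled sketch in Python. The name lists and number formats are made up, and the generator is seeded so the sample is reproducible:

```python
import random
import string

FIRST = ["Asha", "Ravi", "Meera", "John", "Li", "Sara"]
LAST = ["Kumar", "Shetty", "Smith", "Chen", "Rao"]

def random_name(rng):
    return f"{rng.choice(FIRST)} {rng.choice(LAST)}"

def random_mobile(rng, country_code="+91", digits=10):
    # First digit non-zero so the number keeps its full length.
    body = str(rng.randint(1, 9)) + "".join(rng.choices(string.digits, k=digits - 1))
    return f"{country_code}{body}"

def random_email(rng):
    user = "".join(rng.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.com"

rng = random.Random(42)  # fixed seed => same sample on every run
sample = [{"name": random_name(rng),
           "mobile": random_mobile(rng),
           "email": random_email(rng)} for _ in range(5)]
for row in sample:
    print(row)
```

The dedicated tools below do the same thing at scale, with far richer locales and formats.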

GenerateData.com
Test Data generation
1. Names
2. Phone numbers
3. Email addresses
4. Cities
5. States
6. Provinces
7. Countries
8. Dates
9. Street addresses
10. Postal zip code
11. Number ranges
12. Alphanumeric strings
13. Country specific data (state / province / county) for US, Canada, UK etc
14. Auto-increment
15. Fixed number of words
16. Random number of words

Test File generation
1. XML
2. Excel
3. HTML
4. CSV
5. SQL

Hexawise
1. Pairwise testing using multiple variables
2. Valid pairs
3. Invalid pairs
And a lot more at http://hexawise.com/.

Allpairs
AllPairs helps with pairwise testing. For example, if your product needs to be tested on 3 different browsers with 2 versions each, AllPairs can generate the reduced set of combinations for you. More at http://satisfice.com/tools.shtml.
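A minimal greedy reduction in Python gives the flavour of what pairwise tools do (a naive sketch, not AllPairs' own algorithm): enumerate every pair of parameter values that must appear together, then repeatedly pick the full combination that covers the most still-uncovered pairs.

```python
from itertools import combinations, product

def value_pairs(case: dict) -> set:
    """All (parameter, value) pairs a single test case covers."""
    return set(combinations(sorted(case.items()), 2))

def pairwise_suite(params: dict) -> list:
    """Greedy pairwise reduction: cover every value pair with few full cases."""
    names = list(params)
    all_cases = [dict(zip(names, combo)) for combo in product(*params.values())]
    needed = set().union(*(value_pairs(c) for c in all_cases))
    suite = []
    while needed:
        # Pick the case that covers the most still-uncovered pairs.
        best = max(all_cases, key=lambda c: len(value_pairs(c) & needed))
        suite.append(best)
        needed -= value_pairs(best)
    return suite

params = {"browser": ["Firefox", "Chrome", "IE"],
          "version": ["v1", "v2"],
          "os": ["Windows", "Linux"]}
suite = pairwise_suite(params)
print(len(suite))  # 6 cases instead of the 12 exhaustive combinations
```

Every browser-version, browser-OS and version-OS pair still appears in the reduced suite, which is the pairwise coverage guarantee.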

Testomate
To be released. I have been a beta tester on this and it’s pretty impressive :-)

Feedback welcome, as always,

Regards,
Pari