
Giving Testing its due

Someone on LinkedIn asked how to emphasise quality, in three steps, in an organisation where it is apparently given short shrift or not accorded the importance it deserves. I had this to say.

First up, define in your head, or clear the fog around, the word ‘quality’. The word means so many things that I hesitate to use it generically. Alternatives I suggest are: better functionality, easier-to-use software, fewer defects in production, fewer customer escalations, better test coverage, improving tester skills, and so on.

Now, if in your current workplace ‘quality’ is not given its due, my suggested course of action is as follows.

Why is this the case? Being in this place, maybe you already know. Is it about people, process, tools, interpersonal relationships…? You need to understand why “quality” is not given its due. There should be no reason for it, because most people understand the value it brings. So clear the cobwebs, misunderstandings and misgivings around it by first understanding what the malaise is, and then go about systematically prescribing (rather, reiterating) how good testing can deliver the cure, emphasising all functions working together. Do this stridently, from the heart, justifying it with whatever you can find on the web (before-and-after scenarios, debacles and what have you). Paint a holistic picture of the future you visualise, keeping in mind both the organisation and the customer, and you will be listened to. Be one-sided (whining selfishly about testing’s concerns) and you will be side-stepped, probably sinking deeper into the hole than you already were.

Once you have the buy-in, set about repairing what is broken within your function and its interplay with others: maybe the processes, the people, their skills, their confidence and so on. Collaborate and speak with the best people you can find within and outside your company for solutions. Ask for and arrange training, dig around the web, mentor your people. There are umpteen things to do, but what, why and how much depends on your scenario. Keep a check on progress using metrics, trends, coverage analysis and the like, intelligently and wisely; drive, and do not be driven by, the numbers. Check with the people who matter, asking for their feedback on progress, and make corrections. As you near the horizon, re-plan for new goals. Change is the only constant. Evolve. Progress. Keep your ears to the ground and your feet on the earth.

Empower people in your team to eat, drink and breathe ‘quality’. Every person in testing must be an ambassador of your team’s beliefs, vision and good practices. For everyone to speak the same language there has to be consensus within the team, and for consensus it is important for leadership to play its role properly; else the tail will wag while the mouth barks, which is not what is needed.

Domain knowledge by chance

Should a tester acquire domain knowledge from outside to gain an advantage over others and give that supposed golden tinge to his resume?

If I have to answer this in one word I would say No.

The priority, in my view, goes to testing and technical (programming/scripting/tools) skills. So the tester should focus on what, how and how much to test, so as to bring out information and unearth defects faster and better than ever.

To become a better tester, he needs to listen to and learn from testing masters (one must also listen to one’s inner self, which gets better with time if consciously built and fed; one must never be a blind follower. Beware! The testing planet has groups bordering on cultism), be part of testing forums, read and write about testing experiences, be open-minded to change, and talk every night with oneself, noting the rights and wrongs. While he learns testing and gets better, it would help if he understands the world the developer lives in: good coding practices, common development mistakes, their way of thinking, what makes them proud or queasy, and so on. The tester would benefit immensely by learning a scripting language and getting a developer to mentor him and review his scripts. This bond would help not only him professionally but the team as a whole, because the scripts can then be put to interesting uses like populating data, writing and reading logs, picking out errors and so on. While getting on back-slapping terms with the developer, the tester has to keep his professional pessimism, and the reason for his existence, uppermost in his mind, and not fall into bad habits that the ‘smart’ developer might influence him into.
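
As a minimal sketch of the kind of script meant here (the log format, function name and messages are all hypothetical, purely for illustration), a tester might scan application log lines, count entries per level and pick out the errors:

```python
import re

def summarise_log(lines):
    """Count log lines per level and collect the error messages."""
    counts = {"INFO": 0, "WARN": 0, "ERROR": 0}
    errors = []
    for line in lines:
        # Assumed format: "[LEVEL] message"
        match = re.match(r"^\[(INFO|WARN|ERROR)\]\s*(.*)", line)
        if match:
            level, message = match.groups()
            counts[level] += 1
            if level == "ERROR":
                errors.append(message)
    return counts, errors

log = [
    "[INFO] service started",
    "[ERROR] connection refused",
    "[WARN] retrying in 5s",
    "[ERROR] timeout after 3 retries",
]
counts, errors = summarise_log(log)
print(counts["ERROR"])  # 2
print(errors[0])        # connection refused
```

A few dozen lines like these, reviewed by a developer mentor, already cover the uses mentioned above: reading logs and picking out errors.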

As far as domain skills are concerned: when the tester gets into testing for a project, he must learn everything about the domain, allotting time to it (through the supervisor, of course) such that he does it very good justice while performing his other testing activities. When learning the domain he must read the available documents and speak with the business users. He must make copious notes, share and brainstorm with colleagues, ask lots of questions (and learn mind-mapping). Good knowledge of the domain will ensure he tests by delving more deeply into the areas that the business users will be looking at critically.

With experience, the tester acquires domain knowledge not really by choice but because he worked with organisations catering to that segment or vertical, or was allotted, on purpose, a project in a domain he had worked with previously. The collected domain expertise of the tester, and his big-picture knowledge, also opens the Business Analyst role for testers, which is a new avenue to trudge. I think this is better than spending time learning a domain by time-slotting it, as against spending priority time on sharpening testing and technical skills.

Grit

Determined at all costs!

I read this very good post on STP, written by Bruce Butler, and commented on it (as below); it stirred up some emotion within me.

Your words in many places may appear rather caustic, but I can understand the emotion behind them.

The barbs and the bitterness that the job of reporting defects brings are sometimes hard to take, and I have witnessed my people venting these feelings, choked as they sometimes are, with bloodshot eyes. I have also experienced it during my interactions with development teams and their leaders. The atmosphere is dark and sombre, but the job has to be done well by keeping calm and keeping your maturity about you. If you have to hit out, you simply must, by finding more defects: drink the bitterness but pass on the poison. It is nectar that we must try and eventually send to the customer.

The only consolation, in words and visual imagery, I would give my people and myself is that we are doing a service, and someday the worth of finding the defects, information and issues will be realised when the user smiles after using the product, even though we may not be there to cherish it. I think we just need to knuckle down and press on, and peel the onions as you rightly said, even if we have to shed bitter tears!

The link to the STP article is here.

Compare Bugs? Oh No.

Question: I was doing a bug trend analysis yesterday to plan future testing and found that the project had a huge number of bugs compared to other projects. What should I look for in such a case? The project is a technical change, and I am regressing to check the functional impact.

Answer:

  1. I assume the bug trends you drew up are based on a bug database that has been optimised (reviewed and triaged) for correctness and reliability.
  2. When you say “plan the future testing”, I assume you mean that you will use this (bug colonies, for example) to prioritise testing and focus efforts, which is logical. Please do this, but keep your wits about you as far as the overall plan of testing is concerned and what it has to achieve. (That is, don’t let the bug count bias or hijack your thought process, even if the alarm bell has been rung by a higher-up.)
  3. It may be foolhardy to compare the bug count across two or more projects and draw conclusions in the manner you are doing. For all you know, the bug count in your project may mean little if you are comparing apples and oranges. So ask yourself: “Are two projects ever the same, even if the same functionality is developed by two teams?” and “What am I comparing, and am I justified in doing so?” All I am getting at is that comparing might be a hole you are digging; a standalone analysis of the application under test, from different angles, may be more worth your while, unless you have very strong justification to compare.
  4. The large number of bugs found could have any number of causes: the complexity of the application mismatched with developer skills, inadequate developer skills, lack of induction into the domain for the developers, requirements knowledge not articulated or gaps in communication, lack of effort, inadequate planning or time allocation, a very skilled testing team ;) and so on. Adequate thought into such aspects, along with discussions with the critical team members, is really a precursor.
  5. I assume you have also segregated counts of the bugs raised by module, severity and type (functional/UI/configuration/data/etc.).
  6. I assume you will use the analysis to correct the course of testing and better its value for the project and organisation, rather than it being a mundane process exercise for fattening your organisation’s process repository.
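
The segregation in point 5 can be sketched in a few lines. This is only an illustration (the bug records, module names and categories are invented); the point is to look at the counts from several angles within one project rather than across projects:

```python
from collections import Counter

# Hypothetical triaged bug records: (module, severity, type)
bugs = [
    ("payments", "high",   "functional"),
    ("payments", "high",   "functional"),
    ("payments", "low",    "UI"),
    ("reports",  "medium", "data"),
    ("reports",  "low",    "UI"),
]

by_module = Counter(module for module, _, _ in bugs)
by_severity = Counter(sev for _, sev, _ in bugs)
by_type = Counter(kind for _, _, kind in bugs)

# Modules ordered by bug count: a starting point for prioritising
# regression effort, not a verdict on the modules (or the developers).
hotspots = by_module.most_common()
print(hotspots[0])  # ('payments', 3)
```

Even a crude breakdown like this supports the earlier point: a standalone, multi-angle view of one project's bugs is more defensible than a raw count compared against another project.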

Test Confidence

Someone on LinkedIn asked how to derive a “Test Confidence” ratio. Here is how I responded.

I think “Test Confidence” is best derived by experiencing the application first-hand over a period of time, or by soaking up the expert opinion of someone, or some people, who have. Whether it is bugs, issues, or plain speaking about the application’s behaviour, functionality, performance, usability or other ‘ities’, what you or they feel, based on the tests that have been performed, will be very helpful indeed. Maybe a questionnaire can be compiled, with a rating system, for the tested application. If this is answered by people who know the business the application supports and have diligently traversed its length and breadth, as a passionate tourist would in a city he is visiting, armed with atlases and with guides in tow, then you have information which is reliable and can both support and justify your cause.

This information is rarely captured and presented, since management trusts numbers, which are easier to present and peek at, and graphs based on them are such pretty things.

Numbers/Metrics will have to be analysed and interpreted carefully.

If it is a decreasing count of defects across the various phases of testing, or across different cycles of the same phase (say, System Testing), you will have to establish that: the environment has stayed the same (browsers and their versions, OS versions, etc.); the testers are the same, or of the same skill; the build has been more or less consistent in terms of features developed and delivered for testing; fixes have been brilliant and not caused any regression; and as much time has actually been spent on testing as you planned. If this is the situation you are in, your job is done.

But as you know, it seldom is.

Metrics with weights for each factor mentioned above, making the eventual number smaller or larger, may be the way to go if there is a lot of pressure on you to produce a ‘Test Confidence’ number. The risk is that the numbers more than likely get convoluted beyond a point and may stop telling the story you set out to tell. If this happens, then defending the numbers, rather than providing a no-holds-barred assessment of the application tested, would, I think, jeopardise the existence of both the testing team and the organisation it belongs to!
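
A minimal sketch of such a weighted number, assuming the factors above are each rated 0–10 (the factor names, weights and ratings here are all invented for illustration; in practice the ratings would come from the questionnaire answered by people who know the application):

```python
# Hypothetical weights for the factors discussed above; they sum to 1.0
# so the result stays on the same 0-10 scale as the ratings.
weights = {
    "environment_parity":    0.15,
    "tester_consistency":    0.15,
    "build_stability":       0.25,
    "regression_free_fixes": 0.25,
    "planned_time_spent":    0.20,
}
ratings = {
    "environment_parity":    8,
    "tester_consistency":    7,
    "build_stability":       6,
    "regression_free_fixes": 9,
    "planned_time_spent":    5,
}

# Weighted average: one "Test Confidence" number.
confidence = sum(weights[f] * ratings[f] for f in weights)
print(round(confidence, 2))  # 7.0
```

Note how quickly the single number hides the detail: a 9 for regression-free fixes and a 5 for time actually spent testing average out to a comfortable-looking 7. That is exactly the convolution risk described above.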

Hope Soars!


“Hope” is the thing with feathers–
That perches in the soul–
And sings the tune without the words–
And never stops–at all–
— Emily Dickinson


Expecting to Achieve

It is funny that we think about what we want from our testers mostly during recruitment or during their performance reviews. Is this correct? No. We should be thinking about it, consciously and subconsciously, all the time, so that we can correct the course for them and help test professionals move along the path to progress and prosperity in their working lives. This also ensures that test professionals are motivated, aware and prepared for what lies ahead, and maybe beyond them for now.

As a Test Leader, here are my very high-level expectations of testers, from their birth to their coming of age (well, almost; do we ever stop growing in wisdom and bettering ourselves?).

The expectations are like a knowledge pot that fills up with goodies as the tester moves from one milestone to another, as he discovers, digs, fights, raves, exults and much more on his journey from novice to better to best. (Hear that applause growing louder, from a clap to a thunder-clap. Aah, recognition!)

Test Trainee (fresh out of an education/training institute)
– Conceptually strong.
– Deeply interested in testing.

Jr. Test Analyst (< 2 years)
– Experience of employing testing techniques to discover bugs and unravel unknown application information.

Automation Engineer (> 2 years)
– Can manual-test very efficiently.
– Understands automation pros & cons.
– Technically adept.

Test Analyst (> 2 years)
– Elucidates knowledge of the nuances of all phases, from requirements understanding to design to bug triage to regression-testing fixes/changes.

Sr. Test Analyst (> 4 years)
– Can test & tear with ease.
– Strong domain, technical & testing skills.
– Very good communication & negotiation skills.
– Aware of team dynamics.

Sr. Automation Engineer (> 3 years)
– 2 years’ experience working with tools.
– Can automate without a tool (e.g. with Perl/Ruby scripts).

Test Lead (Automation) (> 4 years)
– 2 years’ experience leading automation teams.
– Designs holistic solutions with an eye on the future.
– Can make the best use of diverse technology to achieve ends.

Test Lead (> 4 years)
– 2 years’ experience leading teams & strategising testing.
– Uses tools effectively.
– Very skilled in problem solving / multi-tasking.
– Collaborates & sets expectations across teams without fuss.
– Handles slippage with finesse.

Asst. Test Manager (> 5 years)
– 4 years’ experience leading teams & strategising testing.
– Collaborates with business & prioritises tasks accordingly.
– Proactive risk mitigation.

Test Manager/Leader (> 8 years)
– Recruits, scales and builds the team with the right elements and the right mix, with vision.
– Ensures leadership, management & utilisation to achieve timelines & drive the business ahead.
– Coordinates correct and timely communication to all concerned.
– Ensures the team’s preparedness in testing, domain and technical aspects through training and mentoring.

Personality traits, like maturity befitting one’s age, a good attitude towards work (testing) and peers, a demonstrable ability to bond, listening skills and industriousness, are default expectations for all roles.

While experience is important in judging the competence of an individual, it must be taken as an input to be verified, and by no means as an unquestioned, authoritative vote of competence.