Evaluating Testers' Work

An interesting question that keeps popping up on all the major testing forums concerns how to evaluate a tester's performance. The tester's work is not mysterious, but it is made hard to judge by the thought processes behind it and by the contradiction between those who see testing as fast and simple and those who think (rightly, in my view) that it is hard work requiring creativity and many disparate skills: problem solving, negotiation, initiative, adaptability and, above all, an innate knack for looking beneath the right stone and finding the hidden snakes quickly, if you know what I mean.

The approach I recommend is born out of collecting data over a period of time, starting with informal discussions and culminating in formal data collection, analysis, review and discussion supported by testing metrics. Talking with testers and listening carefully helps me understand their mindset and its depth as they describe what they are testing, which techniques they are using and how they solve problems. Listening to their peers and seniors likewise feeds the continuous evaluation of the tester.

I have always been a numbers man, but I believe one needs to understand and analyse the numbers carefully before using them to validate one's judgement about people (judgement formed directly, for the people we interact with or mentor regularly, or gathered second-hand, for people further down the hierarchy whom we do not manage directly and see less often). This must be supported with feedback from other supervisors and seniors whose opinion matters. Numbers do tell a story, but be aware that they can mislead if the reading is off, so drilling down beyond a number is vital most of the time. Sometimes you may even need to throw the numbers out of the window and revise the data-collection strategy, because the purpose has since been clarified. Being a slave to numbers is detrimental; making them serve your judgement is what helps.

Now for the nitty-gritty of how a systematic tester appraisal can be done.

I designed two performance-evaluation forms: one for the test analyst (tester) and one for the test lead.

For the tester, the form has three sections:
a) Technical skills (70% weight),
b) Soft skills (25% weight) and
c) Other skills (5% weight).
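
To make the weighting concrete, here is a minimal sketch of how the three section ratings roll up into one score. The weights come from the form above; the 1-to-5 rating scale, the function and the sample figures are my illustration, not part of the form:

```python
# Roll the three section ratings into a single weighted score.
# Weights come from the form above; the 1-5 scale is assumed.
SECTION_WEIGHTS = {"technical": 0.70, "soft": 0.25, "other": 0.05}

def overall_score(section_ratings):
    """Combine per-section ratings (1-5) into one weighted score."""
    return sum(SECTION_WEIGHTS[name] * rating
               for name, rating in section_ratings.items())

# Example: strong technically, average on soft and other skills.
print(overall_score({"technical": 4.5, "soft": 3.0, "other": 3.0}))  # 4.05
```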

(a) Under Technical skills, two categories are used:
– System Understanding and Test Design
– Test Execution and Reporting

For System Understanding and Test Design, the objectives are
i) test cases that are easily understandable,
ii) business functionality translated well into test cases,
iii) sound domain understanding,
iv) completion of design as per schedule.

For Test Execution and Reporting, the objectives are
i) maximum detection of defects,
ii) zero defects slipping to the customer,
iii) quality of the defects raised,
iv) defects documented clearly,
v) completion of execution as per schedule,
vi) minimal defects marked cancelled.
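
Several of these objectives reduce to simple ratios once the raw counts are pulled from the defect tracker. A small sketch; the function names and sample figures are illustrative assumptions, not part of the form:

```python
# Illustrative ratios for the execution objectives above. The raw
# counts would come from the defect tracker; all names are assumed.

def defect_detection_pct(found_in_test, slipped_to_customer):
    """Share of all known defects caught before release (objectives i-ii)."""
    total = found_in_test + slipped_to_customer
    return 100.0 * found_in_test / total if total else 100.0

def cancelled_defect_pct(cancelled, raised):
    """Share of raised defects later marked cancelled (objective vi)."""
    return 100.0 * cancelled / raised if raised else 0.0

print(defect_detection_pct(47, 3))  # 94.0 -> three defects slipped
print(cancelled_defect_pct(4, 51))  # ~7.8 -> drill down before judging
```

As argued earlier, such numbers are a starting point for drilling down, not a verdict in themselves.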

(b) Under Soft Skills I evaluate the following:
i) Analytical and Problem Solving
ii) Communication
iii) Maturity
iv) Adaptability
v) Ability to take initiative
vi) Interpersonal Skills
vii) Negotiation Skills
viii) Assertiveness
ix) Tenacity
x) Process Orientation
xi) Cultural Alignment – Geographies
xii) Ability to see the big picture
xiii) Ability to coordinate/organise
Each of the above is explained briefly but clearly in the template. For example, “Analytical & Problem-solving” means “Clarifying description of the problem, analyzing causes, identifying alternatives, assessing each alternative, choosing one, implementing it, and evaluating whether the problem was solved or not.”

(c) Other is reserved for personal initiatives such as training, participation in corporate initiatives and so on.

For each objective, a measure and a *guideline* explain how the supervisor (or the tester, in a self-assessment) can rate it. Alongside the guidelines one has to use judgement too, and even rely on the supervisor's word, backed up with whatever evidence and feedback one can get.
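
To show what one row of such a form might look like, here is a minimal sketch of an objective with its measure and guideline. The objective/measure/guideline layout mirrors the template described above, but the dataclass and the sample guideline wording are my assumptions, not the actual form:

```python
# One row of the evaluation form. The field layout mirrors the
# template; the sample measure and guideline text are assumed.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str        # what is being rated
    measure: str     # what is counted or observed
    guideline: str   # how to map the measure onto a 1-5 rating
    rating: int = 0  # filled in by the tester and the supervisor

row = Objective(
    name="Minimal defects marked cancelled",
    measure="% of raised defects later cancelled as invalid",
    guideline="5 if under 5%; 3 around 10%; 1 above 20% (temper with judgement)",
)
```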

For the test lead, the form covers aspects like estimation, strategy, review effectiveness and reporting (under Technical Skills), plus a new section for Management Skills covering fulfillment of commitments, proactiveness in responding to the customer, liaison with stakeholders, team management and crisis management.

The Soft Skills section repeats the thirteen skills listed for the tester above and adds
xiv) Leadership Skills
xv) Team Building

I rounded off both forms with a development plan listing the training needed, with time frames and so on.

As with most performance evaluations, there has to be transparency, professionalism and a sense of fair play throughout. Respect for the individual (for the professional he is, for the capabilities he has and for those he does not have yet) and the discipline to keep personal biases aside must be on display, so that the anxiety that accompanies an appraisal is replaced with a positivity that fosters the relationship and helps team spirit.


2 responses to “Evaluating Testers' Work”

  1. Hmm, very well put, Sandeep! I could definitely say it is the output of a tried and evaluated practical approach that anyone in the testing world would understand and appreciate.
    When it comes to evaluation, one thing stands out in my mind: the normalisation process that happens after the initial rating given by the immediate supervisor. The myth/theory/assumption/experience of the so-called evaluation group (mostly HR) is that the overall average rating of all employees put together should be some number X, which puts a few of the individual ratings at stake. What do you think about this? I feel offended when a group of people who never knew you, your work, your contributions or the reasons for your behaviour take the ‘final’ call based on whatever aspects only they know…

  2. The normalisation process has to weigh in the amount available for distribution to the appraisees; linked with that, organisations tend to create baskets where x% will be top rated and paid amount B, y% will be the next band and paid amount C, and so on…

    I do agree that it is not a nice feeling when the paycheck is not as heavy as you expected because your performance was ‘normalised’, compared with the contented feeling you had when the performance review happened. It seems like a punch on the nose after a caress.

    Unfortunate, but that is the reality in many an organisation… What could help matters is the process itself and some transparency about what goes on behind closed doors.
