3 Things Your Manager Should Know About You as a Tester
Let’s face it, not everyone knows or cares what a tester does every day and how she or he does the testing. The most common opinion I have heard is that testers have quite a boring job, clicking around and trying to find defects or to break the application. Well, that’s true, but there is much more to it than that.
Over the years I have dealt with many types of managers. From those who see testing as part of the development process and understand what software testing is and what it is not, to those who see it as something extra to do after development is finished, a kind of check that the door is closed.
You can easily spot which ones I prefer, can’t you? But we don’t always get to choose, which is why we need to adapt.
A good approach is to sit down with our manager and set the right expectations about testing activities together. I have come to think that a manager should know the following about software testing and, more than that, about testers.
1. Software testing is not bulletproof
Having a person or a group of people called “testers” exercising the functionality of an application and striving to find every possible defect does not mean that, after they give the “go-live” approval, the application will be defect free.
Usually, software testing also includes the following:
- understanding the business of the application under test;
- designing the scenarios to test a specific functionality;
- thinking of all the possible ways a user can exercise the functionality of the application.
Accomplishing all of the above and having very good test coverage, be it manual or automated, does not make you 100% sure that end users will find no defects. In short, a tested application is not a defect-free application.
2. Software testing is hard to estimate
I am often asked the following question:
“When do you think the user story you are working on can be moved to the Done column?”
And my answer is:
“It depends.”
Well, yes, it really depends.
When it comes to estimating the testing effort, I have seen many testers make guesstimates or let themselves be driven by gut feeling or experience. That is because, in most cases, testing a piece of functionality is much more than going through each step of the acceptance criteria, doing some clicking, and deciding which step passes or fails. It also requires a closer look at the following aspects, which can become variables in your estimation equation (a rough sketch follows the list below).
- Encountering other defects while testing your piece of functionality. This means spending more time than expected figuring out how to reproduce them and whether they block you or not.
- Running into inconsistencies in the acceptance criteria, which requires more time with the product team to clarify them.
- Regressions, meaning parts of the application we would not expect to be affected by the latest code changes.
- Finding defects and spending time discussing and documenting them, then waiting for them to be fixed so you can do confirmation testing, followed by regression testing to make sure the fixes did not break something else.
- Change requests.
- Test server downtime or failed builds.
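To make the idea of an “estimation equation” concrete, here is a minimal sketch in Python. The function name, parameters, and every number are purely illustrative assumptions of mine, not a recommended model; if you use something like this, calibrate the buffers against your own team’s history.

```python
# Illustrative only: a rough way to turn the variables above into an estimate.
# All names and multipliers are assumptions, not recommended values.

def estimate_testing_hours(
    base_hours: float,          # walking through the acceptance criteria
    expected_defects: int,      # defects you typically find in similar work
    hours_per_defect: float,    # reproduce, document, confirm, re-test
    clarification_hours: float, # time with the product team on unclear criteria
    regression_hours: float,    # re-checking areas the change might touch
    environment_risk: float,    # e.g. 0.1 for ~10% lost to downtime or failed builds
) -> float:
    """Return a padded estimate instead of the optimistic base number."""
    defect_hours = expected_defects * hours_per_defect
    subtotal = base_hours + defect_hours + clarification_hours + regression_hours
    return subtotal * (1 + environment_risk)


# Example: a "4 hour" story easily grows once the buffers are added.
print(estimate_testing_hours(
    base_hours=4,
    expected_defects=2,
    hours_per_defect=1.5,
    clarification_hours=1,
    regression_hours=2,
    environment_risk=0.1,
))  # roughly 11 hours, not the optimistic 4
```

The exact numbers matter less than the point: the visible work of clicking through acceptance criteria is only one term in the equation.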
3. Rushing the testing before the end of the cycle is not the best solution
If you work in cycles (Sprints, if you follow Agile methodologies), you know well that testers very often get the code to test 2–3 days before the end of the Sprint. Obviously, this is not a rule. There are cases where true Agile works and testers and developers deliver functionality together.
But let’s consider the first scenario and assume the Sprint is two weeks long. You, as a tester, spend the first week preparing test cases, thinking through and documenting test scenarios based on the acceptance criteria, preparing some test data, or even writing the skeleton of your automated tests. Maybe you test some of the user stories whose implementation is already finished. Then come a couple of days when not much happens. And after that, here you are one day before the end of the Sprint, and more than half of the user stories in the initial commitment are suddenly ready for testing.
When you run into such a bottleneck, it is better to propose to your manager that you look more closely at other aspects of the delivery process too. Reaching the point where testers are overwhelmed at the end of the Sprint and rushed to close all the user stories may be caused by other parts of the process that are not working well.
Takeaway! :)
I think it is part of our duty as testers to create awareness of what software testing means and what it implies. I would also encourage testers to work with their managers to set accurate expectations for their testing activities and not to compromise the quality of the application under test because of external pressure.