How Was It at the Software Testing World Cup?
Every contest, no matter the domain, has a good impact on me and brings new ideas to life. We all know the famous quote:
Stop competing with others. Start competing with yourself.
But it is also true that when you feel the pressure of a contest, you put all your wheels in motion and create big things. I think this was the main impact the testing contest had on me: it opened my eyes to new solutions and new ideas, making me more creative within a limited context (limited by time and by knowledge of the application under test).
What was it all about?
I first heard about the Software Testing World Cup two years ago, but only last year did I decide to form a team of four and participate in the contest.
Long story short, the competition rules (as per the official STWC rules) are:
- The testing period is three hours (3h), during which your team has to test.
- The actual SUT (Software Under Test) will be announced at the start of the period to guarantee fair play for all.
- For three hours, Matt Heusser, our lead judge, will act as “customer” on audio and video; you can ask questions using the comment and chat features on YouTube.
- Teams will produce two major deliverables: bug reports and a test report.
The software under test turned out to be a mobile application, and we also received information about the device requirements: for iOS, an iPhone 5 running iOS 8, or any newer phone and OS; for Android, version 4.3 or higher with HDPI resolution or higher. Tablets and iPads were also supported.
We started our preparations one day before the competition. The first step was to get familiar with the team members. Although we were colleagues at the same firm at that point, we needed some time to catch up and get to know each other better. Then we went through the bug reporting tool in order to become familiar with it. One important step was to assign a role to each member of the team: a Product Owner role and Tester roles. The Product Owner was in charge of creating and updating the test report, testing the application, and collecting information from all the Testers about what they tested, what bugs they found, and how severe those bugs were. The Testers were in charge of testing the application, reporting bugs, and providing input on what and how they tested. To cover as many devices as possible with different operating systems, every Tester had a different device and OS configuration to test on.
Our main focus was on delivering a good test report. We knew that a test report should:
Help the decision maker figure out whether he is ready to “ship” the product, whether it needs more fixes, what to invest in next.
The test report contained the following information:
- the variety of mobile devices and platforms covered;
- usability and user-friendliness;
- what areas were covered by our testing activity;
- external factors such as loss of internet connectivity and interruptions (phone calls, SMS);
- metrics: a chart reflecting the number of bugs and their severity;
- non-functional aspects: input on how the system behaves under high load.
As Testers, we tried to give our input on functional and non-functional testing and usability, and we raised concerns about the user interaction, also offering suggestions on which areas could be improved.
One of the things that eased our work on the non-functional side was that the application was heavily used during the contest, which revealed a bottleneck under high load. Not only did the application start to run a bit slower, we also encountered all sorts of random crashes and got unexpected results when performing some actions.
Would I do it again?
One of the best lessons I learned from this contest was that, although it is important to find and log as many bugs in the application as you can, it is just as important to deliver a detailed test report to stakeholders, providing them with metrics about the bugs found and the testing coverage.
Photo by Kyle Glenn on Unsplash