From Nothing To Functional Coverage


In one of my previous posts I wrote about testing notes and how we came to the conclusion that we needed to get rid of test cases and replace the practice of writing test cases with writing testing notes. Nothing too fancy, nothing too complicated: our lives seemed to improve, the time spent on documenting test cases was reduced, the quality of the product was not affected, and all the other testing processes seemed to work fine.

In theory, when both manual and automated testing are involved, the manually written test cases are used to decide what is going to be automated. With such a correlation between test cases and automated tests, it is easy to determine the percentage of coverage provided by the automated test suite.
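For illustration, here is a minimal sketch of that arithmetic in Python, with entirely hypothetical test case names; the point is only that a one-to-one correlation between test cases and automated tests makes the percentage trivial to compute.

```python
# Hypothetical test case names; each automated test is assumed to map
# back to exactly one manual test case.
test_cases = {"login", "logout", "search", "checkout", "refund"}
automated = {"login", "search", "checkout"}

coverage = len(automated & test_cases) / len(test_cases) * 100
print(f"Automation coverage: {coverage:.0f}%")  # -> 60%
```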

But…

It is clear that any decision we take has its good parts, but it also happens to have some bad parts, or at least some grey ones. This was the case when dropping the test cases and introducing the testing notes. We realised it when the following questions came up:

What about functional coverage? How are we going to measure it?

For a while, we had no clear answer to this question. We could no longer draw a correlation between the automated tests and the test cases that used to describe specific functionalities of the application. We had a vague idea of how much of the application's functionality the automated scripts covered, but it was more of a guess, based on the experience and maturity we had gained working on the product. More than that, we were unable to express the functional coverage as a number (a percentage).

As I said earlier, functional coverage provides essential feedback on what was and was not tested, and it cannot be deduced from code coverage: 100% code coverage from an automated test suite does not mean 100% functional coverage. While there are plenty of tools for determining code coverage, functional coverage is very hard to measure automatically when there is nothing to compare against. Many voices say that functional coverage is hard to achieve.
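A tiny, hypothetical Python example of the difference: the single test below executes every line of the function, so a code coverage tool reports 100%, yet whole functional requirements remain unchecked.

```python
def shipping_fee(weight_kg: float) -> float:
    """Hypothetical business rule: flat fee plus a per-kilogram rate."""
    return 5.0 + 2.0 * weight_kg

def test_shipping_fee():
    # This one test executes every line above: 100% code coverage.
    assert shipping_fee(1.0) == 7.0

# Nothing, however, checks the (hypothetical) requirements that a
# zero-weight parcel is free or that negative weights are rejected,
# so functional coverage is well below 100%.
```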

It became clear that, in order to measure the amount of functionality the application performs, we needed a model to report against, like a map of the whole product under test. Building such a model, or map, was more complex than writing down the menus and the names of the screens a user can navigate through in the application. It was also important to decide how deep the map should go, considering the risks in our current context. With such a map, we could start mapping the automated tests to the functions of the product. To make the map easier to understand and use, we split it into functional areas, which gave us a better overview of every major piece of the application.

Building the map became an ongoing activity: each time new functionality was added to the product, we updated the map to reflect the changes.
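The post does not prescribe a format for the map, but to make the idea concrete, here is one possible representation, a minimal Python sketch with invented areas, functions, and test names. Each function is mapped to the automated tests that exercise it; an empty list marks a function with no automation yet.

```python
# One possible shape for the functional map (all names hypothetical):
# functions grouped by functional area, each mapped to the automated
# tests that exercise it.
functional_map = {
    "Authentication": {
        "log in with email": ["test_login_email"],
        "reset forgotten password": ["test_password_reset"],
        "log out": [],  # no automation yet -> an untested gap
    },
    "Reporting": {
        "export report as PDF": ["test_export_pdf"],
        "schedule weekly report": [],
    },
}
```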

This new approach led to several advantages:

  • Helped us create a picture of the actual functional coverage offered by the automated scripts.
  • Offered a higher level of awareness of the new functions added to the product.
  • Made it easier to spot the gaps between the map and the automated suite; these gaps are untested areas.
  • Helped the manual testers know which areas are not covered by the automation suite and need to be addressed by manual testing.
  • Helped determine priorities for manual regression testing: the map tells the manual testers which areas have higher automated coverage, so those areas can be given lower priority during manual regression (see the sketch after this list).
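To make the last two points concrete, here is a minimal sketch, again with invented names, of how a map shaped like the one above can be walked to list the gaps and rank the areas by automated coverage.

```python
# Hypothetical map: each function maps to the automated tests covering it.
functional_map = {
    "Authentication": {
        "log in with email": ["test_login_email"],
        "log out": [],
    },
    "Reporting": {
        "export report as PDF": ["test_export_pdf"],
        "schedule weekly report": [],
    },
}

for area, functions in functional_map.items():
    covered = [f for f, tests in functions.items() if tests]
    gaps = [f for f, tests in functions.items() if not tests]
    pct = 100 * len(covered) / len(functions)
    print(f"{area}: {pct:.0f}% automated; manual focus -> {gaps}")
```

Areas with a low percentage and a long list of gaps are the first candidates for manual regression.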

We have now been using this map of functions for more than a year, and so far it has given us a clear picture of the functional coverage of our automated tests.


What kind of techniques do you use to measure functional coverage?


Photo by Tabea Damm on Unsplash.