How Docker Can Rock A Tester’s World?

I first heard about Docker a couple of years ago. At that time, our team seemed to like the idea of adopting the newest and coolest kid in town, and a lot of questions were raised about how to migrate from Vagrant to Docker.

All the concepts that Docker brought (images, containers, packaged and shippable applications) became very popular among the developers on our team and triggered a lot of interest in getting started with them.

Despite all this enthusiasm on the development side, on the QA side we did not have a clear picture of how using Docker would impact our work or what its benefits would be. At that time we had both manual and automated testing as part of our testing routine.

First things first.

What is Docker?

Docker is an open source platform for developing, shipping and running applications. It is built around the concept of containers, which are isolated environments where an application can run. A container is a runnable instance of an image. I am not going to reinvent the wheel, so I will simply point you to the detailed information offered by Docker’s documentation.
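
If you already have Docker installed, you can see the image/container relationship in a couple of commands; a minimal sketch using Docker’s own hello-world image:

    # Pull an image from Docker Hub, then run it as a disposable container
    docker pull hello-world
    docker run --rm hello-world

    # List the images and containers present on your machine
    docker images
    docker ps -a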

How can Docker become part of a software tester’s life?

One of the biggest challenges I see for a tester is constantly waiting for a functionality or a feature to be implemented by the developer and become ready for testing. Even if, during that time, the tester is busy writing test cases or doing other preparation tasks, nothing changes that waterfall style of developing and then testing a feature. Coding, then testing. What about coding and testing?

How many times did you get a feature to test when the time pressure was so high that you had to calm yourself down first and pray that no defects would be found?

It happened to me. A lot.

And despite the time limit, when defects were found, the code had to go back to the developer to be fixed and then return for testing. Usually everyone’s blood pressure gets higher in these kinds of situations, right?

What about giving early feedback?

When using Docker, you can overcome a big part of this problem. You can do it by regularly building the partial, runnable code from a given branch and deploying it locally or on a testing environment, even when the code does not yet cover all the acceptance criteria. By using this practice, you, as a tester, can give early feedback on the code implementing that feature. You can pin down bugs, if there are any, check whether the requirements are met, give feedback on the business logic where applicable, and suggest use cases to be covered.
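
As a rough sketch of that workflow, assuming the application has a Dockerfile in the repository root (the branch, image name and port below are placeholders for your own project):

    # Check out the branch you want to look at early
    git checkout feature/new-checkout-flow

    # Build an image from the partial code and run it locally
    docker build -t myapp:new-checkout-flow .
    docker run -d --name myapp-review -p 8080:8080 myapp:new-checkout-flow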

What about deploying again and again?

I continuously encourage testers to be as independent as they can in maintaining their test environments, by being able to update them with the latest code as often as they need to. This might happen over and over during a working day. But in some project contexts, a tester needs super-ninja Linux skills to compile the source code and put it where it needs to go, and there are situations when those super-ninja Linux skills are missing. Or cases when testers depend on developers’ time to update their testing environment. For these cases, Docker offers the possibility to build and run containers, which is much easier. You can create, start, stop, move or delete a container using the Docker GUI or the Docker command line interface.
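
A minimal sketch of that lifecycle from the Docker command line (image and container names are placeholders):

    # Create and start a container from an existing image
    docker run -d --name test-env myapp:latest

    # Stop, restart or remove it whenever you need to
    docker stop test-env
    docker start test-env
    docker rm -f test-env

    # Refresh it with the latest build, no compiling involved
    docker pull myapp:latest
    docker run -d --name test-env myapp:latest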

What about…

“Works on my machine”.

Docker also helps remove the well-known cliché, “works on my machine”. Developer and tester test the same image, containing the same code with the same libraries and dependencies, so there is no room for “works on my machine” anymore.
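
One way to guarantee this is to share the exact same image through a registry; a sketch, with a hypothetical registry and tag:

    # The developer tags and pushes the image that was built once
    docker tag myapp:new-checkout-flow registry.example.com/myapp:new-checkout-flow
    docker push registry.example.com/myapp:new-checkout-flow

    # The tester pulls and runs exactly the same image
    docker pull registry.example.com/myapp:new-checkout-flow
    docker run -d -p 8080:8080 registry.example.com/myapp:new-checkout-flow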

What about using a shared testing environment?

Has it ever happened to you that you had to pause your testing on a testing environment because the services your application was using just stopped?

Yes, I know the feeling.

This typically happens when more than one tester is using a testing environment and someone else decides to deploy the code they need on that server. The downtime is not long, but it is enough to make you put your work on hold or to lose a certain configuration you needed for testing.

Here Docker comes to help by enabling you to create your own testing setup of the application, without worrying that someone else may alter your data. Every container that you run runs in isolation.
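
For example, two testers can run their own instances of the same application side by side, each with its own container name and host port (both placeholders here), without stepping on each other’s data:

    # Tester A's private instance
    docker run -d --name myapp-anna -p 8081:8080 myapp:latest

    # Tester B's private instance, fully independent of the first one
    docker run -d --name myapp-bogdan -p 8082:8080 myapp:latest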


We took it a step further and used Docker and Selenium to run our automated tests. I am not going to reiterate the setup needed for this, as I found a very useful and explanatory blog post with a step-by-step guide to running an end-to-end test using these two technologies.
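
As a small taste of that setup, a browser can run as a container and your existing tests can point at it; a minimal sketch using the selenium/standalone-chrome image from Docker Hub (treat the details as assumptions and check the guide for the versions you use):

    # Start a disposable Chrome node with the Selenium server on port 4444
    docker run -d --name selenium-chrome -p 4444:4444 --shm-size=2g selenium/standalone-chrome

    # Point your RemoteWebDriver tests at localhost:4444, then clean up
    docker rm -f selenium-chrome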

Happy docker-ing!


Disclaimer: the decision to use Docker on the QA side can be project dependent, and the approach I presented might not be applicable in every situation. This is the way we used it and it worked for us, but you may find other ways to bring Docker into your testing routines.


Photo by Sergey Zolkin on Unsplash.