Test automation can fail for many reasons. Today we will look at one of the major causes of these failures: lack of maintenance.
Lack of maintenance leads to automation failure
Implementing automated tests is an investment: it almost always costs more to automate GUI tests than to run them manually. Teams and their managers generally understand this principle well, and it is in this context that a lot of time is allocated to implementing automation, with the aim of saving time and money in the medium term.
Automating tests greatly reduces the cost of executing them, because a human presence is no longer required.
However, one factor is regularly forgotten: test maintenance. Maintaining manual tests costs little time, and if the tests are not maintained between campaigns, they can always be updated during manual execution. This is not the case with automated tests! Maintaining automated tests is more expensive than maintaining manual ones, because the tests, which have generally become code, must be modified or even recreated.
To reduce this maintenance workload, Agilitest allows you to modify the tests directly in the editor during their execution without having to replay them or re-record them completely.
We can summarize the different costs of tests with this example (note that the costs must be estimated and calculated for each product):
Workload manual vs. automated tests
As far as Agilitest is concerned, the software's high productivity and the ease with which maintenance operations can be carried out mean that the break-even point can be reached much more quickly.
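The break-even reasoning above can be sketched with a small calculation. All the numbers below are hypothetical placeholders; as noted earlier, the real costs must be estimated for each product.

```python
# Illustrative break-even calculation between manual and automated testing.
# Every cost figure here is an assumption, in hours per test campaign;
# estimate the real values for your own product.
MANUAL_RUN = 10        # hours to run the campaign manually
AUTOMATION_SETUP = 40  # one-off cost to automate the campaign
AUTO_RUN = 1           # hours to launch and review an automated run
AUTO_MAINTENANCE = 2   # hours per campaign to keep the scripts up to date

def break_even(max_campaigns=100):
    """Return the first campaign where total automated cost drops below manual."""
    manual_total = 0
    auto_total = AUTOMATION_SETUP
    for campaign in range(1, max_campaigns + 1):
        manual_total += MANUAL_RUN
        auto_total += AUTO_RUN + AUTO_MAINTENANCE
        if auto_total < manual_total:
            return campaign
    return None  # automation never pays off within the horizon

print(break_even())  # with these assumed numbers: 6
```

Note how the per-campaign maintenance cost directly shifts the break-even point: the cheaper maintenance is, the sooner automation pays for itself.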
The maintenance of automated tests is particularly important because if they are not up to date they cannot meet their objective. The accumulation of unmaintained tests leads to an ever-increasing technical debt.
This leads to a vicious circle of tests that are either dropped from the campaign or, worse, tests that keep failing but are ignored by the team because they are known to be out of date.
This behavior creates gaps in coverage, letting critical or major failures reach production that those tests would have detected. The test automation project then becomes obsolete and is abandoned.
How to avoid this maintenance problem?
To avoid the maintenance problem, you must obviously maintain your tests. Unfortunately, neglecting test maintenance, just like letting an application's technical debt grow, is not something anyone does deliberately. This is why the team must put in place a few good practices:
Limit the number of tests
The number of tests must be limited so that maintenance remains sustainable for the team. This forces you to make choices and remove tests from campaigns, but the decision is always explicit, and the tests left unexecuted are chosen according to their usefulness. You could reach infinite software quality with an infinite number of tests, but you cannot afford it.
Build your tests in order to limit their maintenance
Automated tests are code. As such, good coding practices apply. As you know, tests regularly go through the same screens and perform common actions. With a bad architecture, changing one of these actions forces you to update every test. A good architecture, on the contrary, lets a single modification propagate to all the tests. To achieve this, create sub-scripts that form bricks, with each test calling a series of bricks.
Agilitest addresses this problem by making it easy to create sub-scripts that can be called from other scripts:
Calling a login sub-script with Agilitest
Data Driven Testing with Agilitest using a CSV file
Moreover, each script (and therefore each sub-script) can be parameterized. This centralizes actions even further: in some cases, such as form testing, a single script can cover all the tests of a form's completion and iterate in bulk over CSV or JSON data files.
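The same data-driven pattern can be sketched in plain Python: one parameterized routine iterated over CSV rows. The `submit_form` function and the field names are hypothetical stubs standing in for the real GUI script.

```python
# Sketch of data-driven testing: one parameterized script, many CSV rows.
import csv
import io

# Inline CSV data for self-containment; in practice this would be a file.
CSV_DATA = """name,email,expected
Alice,alice@example.com,ok
Bob,not-an-email,error
"""

def submit_form(name, email):
    """Stub for the real form-filling script; returns 'ok' for a valid email."""
    domain = email.split("@")[-1]
    return "ok" if "@" in email and "." in domain else "error"

results = []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    outcome = submit_form(row["name"], row["email"])
    results.append(outcome == row["expected"])

print(all(results))  # True: every data row behaved as expected
```

Adding a new test case then means adding a CSV row, not writing a new script, which is exactly what keeps the maintenance workload flat as coverage grows.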