When we write automated tests, we want maximum value for the least possible cost and time. The number of test cases needed to verify that a large, modern, complex system is free of errors is practically infinite, so "everything" cannot be automated. In this post we take a closer look at what cannot, or should not, be automated and why.
We need to think strategically to select a limited number of tests to automate. These tests should cover as large a part of the system as possible, be simple to implement, stable and reliable, and cheap to modify and maintain.
Test automation is about streamlining parts of the test work, at least the parts that are most suitable. Therefore, there are some categories of tests that we should not automate.
Tests that require intelligence and emotion
A computer completely lacks intelligence, taste, and feeling. It is therefore not possible to create good automated tests to evaluate things such as the following:
Is the layout clear and intuitive?
Is the user experience pleasant?
Is it ergonomic to use the application?
Is it easy to understand how to use the system?
Is the user interface visually appealing?
Don’t spend time and energy trying to verify even parts of these aspects. A computer lacks the ability to evaluate them; such tests become complex and unreliable and can, even in theory, verify only a negligible fraction of what we actually care about.
Applications that are not yet stable (too early in the life cycle)
Tests that interact with the application via the graphical user interface (GUI) are very sensitive to change. Seemingly small adjustments to the application’s user interface cause the tests to stop working. It is therefore not a good idea to automate tests against an application that is not yet stable: the fact that it is unstable means it will keep being changed, modified, or fixed, and the tests will keep breaking.
Tests that interact with the application via the graphical user interface are, compared to other types of automated tests, complex, large, time-consuming, and difficult to modify. It is therefore wise to wait to implement these types of tests until the system has begun to stabilize and no more disruptive changes are expected.
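To make the fragility concrete, here is a minimal sketch (not from the original post; the HTML snippets and the `submit-btn` id are hypothetical) of how a GUI test typically locates an element by a hard-coded attribute, and how a seemingly small UI change breaks it:

```python
# Illustrates why GUI-level tests are brittle: they locate elements by
# attributes that developers change freely during active development.
from xml.etree import ElementTree


def find_submit_button(page_html: str):
    """Locate the submit button the way a GUI test tool would:
    by a hard-coded attribute value in the page markup."""
    root = ElementTree.fromstring(page_html)
    return root.find(".//button[@id='submit-btn']")


# Version 1 of the page: the test's locator matches the button.
page_v1 = "<form><button id='submit-btn'>Send</button></form>"
assert find_submit_button(page_v1) is not None

# Version 2: a "seemingly small" change renames the id, and the
# test breaks even though the feature itself still works.
page_v2 = "<form><button id='send-button'>Send</button></form>"
assert find_submit_button(page_v2) is None
```

Real GUI test frameworks (Selenium and the like) fail in exactly this way: the locator no longer matches, so the test reports an error that has nothing to do with the application's actual behavior.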
Applications that the tool has difficulty supporting (API, GUI)
Sometimes we come across components or parts of a system that are very difficult to write automated tests for. This can have various causes. One example is security mechanisms, such as CAPTCHAs, whose whole purpose is to block bots and unwanted automated manipulation. Another is proprietary components whose implementation details are unknown or secret.
Implementing creative (often complex) workarounds is usually a bad idea in the long run: the tests become erratic and complex. Don’t spend a disproportionate amount of time struggling with tests in near-impossible situations where the results will be poor anyway.
Instead, spend the time and energy on implementing automated tests where it’s easy and successful. What provides utility and value are tests that run quickly, are reliable, robust, and easy to modify and maintain.
Test cases that have not yet passed when run manually
If tests didn’t go well when we ran them manually, the application is not really stable yet. Seemingly small changes to the application’s user interface will cause automated tests to stop working. It is therefore not a good idea to implement automated tests via the graphical user interface (GUI) against an application where the manual tests didn’t pass.
This does not apply, however, to automated tests via an API (e.g. web services such as REST or SOAP). Compared to automated GUI tests, API tests are trivial, short, quick to implement, and easy to modify.
If we can test the system via its API early in the development cycle, it is a good idea to start implementing thorough tests at that level. Once the system has begun to stabilize and no more sweeping changes are expected, it pays off to supplement them with a smaller number of automated tests through the application’s graphical user interface.
Missing information about how the system works, and missing support for the test automation engineer
The most successful approach is to implement the automated tests in parallel with the implementation of the system. The automated tests should run at the lowest possible level (device level, API level, GUI level) to detect defects as close to their source as possible.
Test automation at the GUI level comes relatively late in the development chain: we want the system’s development to have stabilized, with no radical changes or redesigns expected. As mentioned above, this is because of the complexity and size of these tests and the cost and time of modifying and maintaining them.
It is important that those who write the automated tests know in detail how the system is supposed to work. If that knowledge is lacking and neither the team, the organization, nor anyone else can convey it, it is very difficult to create meaningful automated tests.
About the author
Viktor Laszlo is an expert in automation and for more than 22 years he has worked to streamline software testing and development both internationally and in Sweden. Viktor has extensive knowledge in system development and programming as well as in developing tools for functional and performance tests.