A tester’s job is not only about human skills; it also involves many challenges tied to the technical side of testing.
This article presents some of the technical challenges we see most frequently.
Software does not exist in isolation: it communicates with other software via APIs. To deliver a functional product, it is essential to ensure that it can interact with the software around it. These software interactions therefore need to be tested.
These tests can be disorienting for testers, as APIs don’t offer graphical interfaces (software doesn’t need one to communicate) and require specific tools.
Additionally, while it’s easy to master the APIs of our own software, the same isn’t necessarily true of the APIs of the software our product interacts with.
Finally, these highly codified interactions can generate architectures that are not always easy to understand.
There are several reasons why you shouldn’t be afraid of API testing and APIs in general when you’re a tester, even when you’re a functional tester with few “technical” skills:
There are a number of API testing tools, such as Postman, that are accessible and relatively easy to learn,
API tests are generally not particularly complex functionally, as they are standardized messages in which variables are passed. Personally, I like to think of API tests as form tests, in which you check the different values that fields can take,
API tests are also easy to multiply: once you have a message, you can run numerous tests (including data-driven tests) based on that same base message.
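To make this concrete, here is a minimal sketch of a data-driven API test in Python, using only the standard library. The message fields and the `send_order` function are hypothetical stand-ins for a real endpoint; the point is how a single base message fans out into many cases.

```python
import copy

# One base message, as you might capture it in Postman (fields are hypothetical).
BASE_MESSAGE = {"customer_id": 42, "email": "jane@example.com", "amount": 19.99}

def send_order(message):
    """Stand-in for the real API call: validates the message the way
    a server might and returns an HTTP-like status code."""
    if not isinstance(message.get("customer_id"), int):
        return 400
    if "@" not in str(message.get("email", "")):
        return 400
    if not (0 < message.get("amount", -1) <= 10_000):
        return 400
    return 200

# Data-driven cases: each one tweaks a single field of the base message.
CASES = [
    ({}, 200),                          # nominal case
    ({"email": "not-an-email"}, 400),   # malformed field
    ({"amount": -5}, 400),              # out-of-range value
    ({"customer_id": "abc"}, 400),      # wrong type
]

def run_cases():
    """Run every variation of the base message and compare to expectations."""
    results = []
    for overrides, expected in CASES:
        message = copy.deepcopy(BASE_MESSAGE)
        message.update(overrides)
        results.append(send_order(message) == expected)
    return results
```

Adding a new test is just one more row in `CASES`, which is exactly why API tests multiply so quickly.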
API testing is a good first step into the world of “technical” testing, because it’s easy enough to get to grips with once you’ve got your hands dirty. Similarly, in an Agile context, it is increasingly important for testers to be able to intervene in different aspects of testing, and API testing is a skill that is increasingly in demand.
This is not a new challenge! I’ve been hearing about test automation since I started working in 2011. In fact, I have no doubt that testers have been hearing about automation for much longer. So, at first glance, it seems rather surprising to think that automation isn’t totally widespread. In fact, the proportion of automated testing has tended to stabilize rather than grow in recent years.
The reason for this is quite simple: it’s not easy to automate test execution. The tools are numerous, the needs and contexts even more so!
Many automation projects fail because the automation tool is unsuitable, the automated tests are too time-consuming to maintain, the automation goals and strategy are not clearly defined or adapted, or the automated tests are not sufficiently reliable.
To achieve successful automation, you need to select the right tool, train the people who will be involved in automation, identify the scope of automation and adapt the scope and tests to the context.
If you’re a functional tester, test automation can quickly become incomprehensible. Non-technical testers need to develop their scripting skills (to understand and write automated tests) as well as their ability to set up and monitor automated tests. In this case, I recommend taking things step by step, starting with “simple” automation. This can be done with API testing, by using an already developed KDT (Keyword Driven Testing) framework, as with tools such as RobotFramework, or by using automation tools designed for functional testers, which let them familiarize themselves with automation and its constraints. I’m thinking here of tools like Agilitest.
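To illustrate the keyword-driven idea, here is a toy sketch in Python. The keywords, the `App` class and the test table are all invented for illustration; RobotFramework applies the same principle at much larger scale, with testers writing only the keyword table.

```python
class App:
    """Toy application under test (a stand-in for the real product)."""
    def __init__(self):
        self.logged_in = False
        self.cart = []

KEYWORDS = {}

def keyword(name):
    """Register a function under a human-readable keyword name."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("Log In")
def log_in(app, user, password):
    app.logged_in = (user == "alice" and password == "secret")

@keyword("Add To Cart")
def add_to_cart(app, item):
    app.cart.append(item)

@keyword("Cart Should Contain")
def cart_should_contain(app, item):
    assert item in app.cart, f"{item} not in cart"

def run_test(steps):
    """Execute a test written as a table of (keyword, *args) rows."""
    app = App()
    for name, *args in steps:
        KEYWORDS[name](app, *args)
    return app

# A functional tester writes only this table, no code:
TEST = [
    ("Log In", "alice", "secret"),
    ("Add To Cart", "book"),
    ("Cart Should Contain", "book"),
]
```

The technical work (implementing keywords) is done once; the functional tester then composes tests from readable building blocks.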
If the technical side doesn’t pose any major difficulties, then the “only” remaining challenge is setting up and running the automation. The key is to:
Select the test tool to use so that it can handle the various testing requirements,
Propose maintainable tests using good code practices,
Ensure regular maintenance of automated tests,
Ensure regular follow-up of these tests, and keep the test campaign alive.
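As one example of the “good code practices” mentioned above, here is a sketch of the Page Object pattern in Python. `FakeDriver`, the locators and `LoginPage` are hypothetical stand-ins for a real WebDriver setup; the point is that locators live in one place, so a UI change touches only the page object, not every test.

```python
class FakeDriver:
    """Stand-in for a real browser driver (hypothetical API)."""
    def __init__(self):
        self.fields = {}
        self.page = "login"
    def type(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        # Simulate a successful login on submit.
        if locator == "#submit" and self.fields.get("#user") == "alice":
            self.page = "dashboard"

class LoginPage:
    """Page object: locators and interactions for one screen, in one place."""
    USER_INPUT = "#user"
    PASSWORD_INPUT = "#password"
    SUBMIT_BUTTON = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USER_INPUT, user)
        self.driver.type(self.PASSWORD_INPUT, password)
        self.driver.click(self.SUBMIT_BUTTON)

def test_login_reaches_dashboard():
    driver = FakeDriver()
    LoginPage(driver).log_in("alice", "secret")
    return driver.page == "dashboard"
```

If the submit button’s locator changes, only `LoginPage.SUBMIT_BUTTON` needs updating, which keeps maintenance costs under control as the campaign grows.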
Integration of the right non-functional tests
Finally, we’re hearing more and more about non-functional tests. The most common are pen testing (popularized by GDPR), performance testing, adaptability tests (especially for mobile devices) and accessibility tests (popularized by the RGAA in France and WCAG in the rest of the world). The list of non-functional test types will continue to grow, in line with future usage and standards such as the RGESN for eco-design. As you know, it’s impossible to perform all these tests in depth, and a tester needs to know which non-functional tests to perform and to what depth.
My main advice here is to rely on requirements and to “demand” testable non-functional requirements… or, for the points not addressed, to insist that there are no requirements and therefore nothing to test!
I’m aware that the first part is utopian in many contexts. If the existence of such requirements cannot be contemplated, it may be worthwhile to tackle the subject of non-functional testing directly in the company’s testing strategy, with ways of selecting the non-functional tests to be implemented. This strategy can then be translated into test plans (project/product-level strategy). In the absence of quantified requirements, you will need to draw inspiration from the various standards (GDPR, RGAA…) or from what you observe on the market or in production if the product is already in production.
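As an example of turning a quantified non-functional requirement into a test, here is a sketch of a small performance check in Python. The operation and the 50 ms budget are illustrative assumptions; in practice the threshold would come from a requirement, and the measurement would target a real endpoint in a suitable environment.

```python
import time

def operation_under_test():
    """Stand-in for a real call (e.g. an API request in a perf environment)."""
    return sum(i * i for i in range(10_000))

def p95_latency_ms(runs=50):
    """Measure the operation several times and return the 95th-percentile latency."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation_under_test()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]

def meets_budget(budget_ms=50.0):
    """The pass/fail criterion comes straight from the quantified requirement."""
    return p95_latency_ms() < budget_ms
```

The useful part is the shape of the check: a measurable indicator (p95 latency) compared against a number taken from the requirement, not from intuition.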
You could argue that test design isn’t technical. It’s true that you don’t need to know how to manipulate or read code to design good tests. However, designing quality tests is a highly technical aspect of a tester’s job. There are a number of test design methods (the best known to testers are specification-based) which identify, based on test conditions, which tests to run and with what values. Similarly, a tester needs to know how to prioritize and identify which elements to test, and to what extent, according to the risks and resources available.
First of all, you need to know your design techniques, their strengths and weaknesses, and how to implement them. But technical knowledge is not enough: it’s essential to understand the context and the product to be tested. This is what allows you to adapt to the context and propose a mix of techniques that yields the most efficient test suite possible. It is also essential to work in depth on the test data (certain design techniques give strong guidance on the values to choose for certain cases), so as to select the data most likely to reveal the product’s various potential flaws.
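As an illustration of a specification-based design technique, here is a boundary value analysis sketch in Python for a hypothetical rule (an order quantity is valid between 1 and 100). The technique tells you to test on and around each boundary rather than picking arbitrary mid-range values.

```python
def quantity_is_valid(quantity):
    """Hypothetical business rule: an order quantity must be between 1 and 100."""
    return 1 <= quantity <= 100

# Boundary value analysis picks values on and around each boundary.
BOUNDARY_CASES = {
    0: False,    # just below the lower boundary
    1: True,     # lower boundary
    2: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary
}

def run_boundary_cases():
    """Check the rule against every boundary value."""
    return all(quantity_is_valid(q) == expected
               for q, expected in BOUNDARY_CASES.items())
```

Six well-chosen values cover the places where off-by-one defects typically hide, which is far more efficient than testing many arbitrary mid-range quantities.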
Data management and test environments
This is a major problem for many testers! Test environments are not stable, not easily accessible or not close enough to production. Data is not representative, or not numerous enough, or not accessible, or not anonymized… The problem here is that, to be able to test a product correctly, it is essential to get as close as possible to production use, to simulate user behavior as faithfully as possible. Unfortunately, it’s virtually impossible to have an environment as large as that of production, in terms of both volume and interaction with partners. Similarly, relying entirely on production testing with shift right is not the solution.
Data and environment issues are generally quite complex.
Fortunately, there are a number of tools currently available to help us deal with these issues. I’m thinking in particular of environment virtualization, which enables us to create environments on the fly, thus avoiding the problems of environments shared by several teams, or environments with “already used” data.
For data, there are tools available (from publishers or in-house) that enable you to anonymize and take subsets of data from production to obtain representative samples.
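To show the idea behind such tools, here is a minimal sketch in Python that pseudonymizes a direct identifier and samples a reproducible subset. The data layout is invented; real tools also handle referential integrity across tables, which this sketch ignores.

```python
import hashlib
import random

# Invented "production" data for illustration.
PRODUCTION_ROWS = [
    {"id": i, "email": f"user{i}@example.com", "country": "FR" if i % 3 else "DE"}
    for i in range(1000)
]

def anonymize(row):
    """Replace the email with a one-way hash: still unique, no longer personal."""
    masked = dict(row)
    masked["email"] = hashlib.sha256(row["email"].encode()).hexdigest()[:12]
    return masked

def make_test_dataset(rows, fraction=0.1, seed=42):
    """Sample a subset of production-shaped data, then anonymize it."""
    rng = random.Random(seed)  # seeded so the subset is reproducible
    sample = rng.sample(rows, int(len(rows) * fraction))
    return [anonymize(r) for r in sample]
```

Sampling keeps the dataset representative of production distributions, while hashing removes the personal data that must not leave production.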
For the partner side, a solution that is sometimes unavoidable is to set up stubs (test doubles standing in for the partner), since partners don’t necessarily share test environments with our product, or those environments are too often “down”.
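A stub can be as simple as an object that returns canned partner responses. The sketch below uses a hypothetical parcel-tracking partner; the real partner’s API would of course differ.

```python
class PartnerStub:
    """Returns canned responses instead of calling the real partner service."""
    def __init__(self):
        self.responses = {"REF-123": {"status": "shipped"}}

    def track_parcel(self, reference):
        # A default answer lets tests run even for unknown references.
        return self.responses.get(reference, {"status": "unknown"})

def shipping_message(partner, reference):
    """Our product's logic, tested against the stub instead of the partner."""
    status = partner.track_parcel(reference)["status"]
    return f"Parcel {reference}: {status}"
```

With the stub in place, our product’s behavior can be tested at any time, whatever the state of the partner’s environment.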
In short, there’s no miracle solution here, but rather a search for pragmatic solutions (often tool-based) linked to each context. The most important thing is to identify the most problematic points and try to solve them.
Finally, it also happens that environment and data problems can be solved by human intervention in contexts where test environments and data are not under the control of the teams using the environments and data.
About the author
Marc Hage Chahine is a test facilitator working in the expert team of QESTIT. Creator of the French blog “La taverne du testeur” and an active member of the testing community as a lecturer, book author, organizer and speaker at software testing events – he is part of the JFTL (French Testing Day) committee.