
2023 highlights / 2024 prospects

Written by Marc Hage Chahine | Feb 2, 2024

Marc Hage Chahine answers our questions

 

The year 2023 was marked by technological advances and paradigm shifts that redefined conventional approaches to testing. The adoption of artificial intelligence by some companies helped optimize their test campaigns, increasing both efficiency and test coverage. At the same time, automation saw strong adoption, significantly accelerating production cycles. Accessibility also took center stage, prompting test professionals to adopt practices that ensure applications are usable by all.
As we look back on these challenges, what are the prospects for the current year? Our colleague Marc has the answers. He revisits the highlights of the past year while anticipating the emerging trends that will shape the software testing landscape in the coming months.

 

Let's explore the year 2023. Overall, what have you seen in the testing field? What were the highlights of the past year?

 

The year 2023 was a rich one. However, if I had to pick out two highlights from 2023, they would be:

 

  • The multiplicity of skills required: security, accessibility, performance, API test management, strategy implementation, application of different methodologies… all needed to deliver a quality product.
  • Technological advances, particularly those that outpace our knowledge. AI is a very good example, with ChatGPT unveiled to the public at the end of 2022. In concrete terms, we still don't really know how to test every facet of an AI. Difficulties also arise with increasingly complex architectures.

 

Which skills were most in demand in 2023, and will therefore be the trend for the coming year?

 

  • For me, the skills most in demand at the moment are essentially technical: the ability to automate tests and to test APIs (a minimal example follows this list).
  • The BDD approach is also an important point, as are agile methodologies in general.
  • I would add performance and accessibility, the latter currently becoming a priority with the RGAA (the French accessibility guidelines, based on WCAG 2).
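
To make the first point concrete, here is a minimal API test sketch in Python, in the pytest style with the requests library. The endpoint, URL and payload are hypothetical, purely for illustration:

import requests

BASE_URL = "https://api.example.com"  # hypothetical service, for illustration

def test_create_user_returns_201_and_echoes_name():
    payload = {"name": "Ada"}
    response = requests.post(f"{BASE_URL}/users", json=payload, timeout=5)

    # Assert on status, content type and body structure rather than exact text.
    assert response.status_code == 201
    assert response.headers["Content-Type"].startswith("application/json")
    body = response.json()
    assert body["name"] == "Ada"
    assert "id" in body  # server-assigned identifier

Run with pytest; the point is that an API test asserts on contract-level properties (status code, schema) rather than on rendered pages.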

 

As a highlight, you also mentioned technological evolution and the fact that we are at a stage where technological advances such as AI exceed our level of knowledge. Anything new presents challenges. What do you think are the AI-related challenges faced by testers?

 

Artificial intelligence (AI) brings major complexity to the field of testing. The test scope is not clearly defined: an AI is not constrained to produce a specific output for a given input, which makes testing difficult. Working with data, selecting the right algorithms and verifying results are crucial. Moreover, AI is prone to bugs the team often does not anticipate, as illustrated by the following example shared by a friend:

 

---

 

The army wanted to use artificial intelligence to determine on a battlefield whether the tanks present were friendly or enemy, in order to decide whether they should be targeted by drones. They set out to do this by processing a vast amount of data, including photos. However, when implemented, the system proved ineffective. They then identified the problem: all images of friendly tanks were taken in good weather, with the sun shining brightly, while those of enemy tanks were taken in bad weather with clouds. As a result, the artificial intelligence was actually just reporting weather conditions rather than distinguishing allies from enemies. 

 

---

 

These pitfalls, often obvious to humans, are not necessarily so to machines, which underlines the importance of accounting for them in test campaigns. It's also vital not to depend entirely on AI: treat it as a decision-support tool rather than delegating the final decision to it, and carry out regular reviews with adjustments whenever errors appear, recognizing that AI, just like a human being, can make mistakes. One way to catch this kind of failure in a test campaign is sketched below.
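
As a hedge against spurious correlations like the tank/weather one, a metamorphic test can check that the model's verdict survives changes that should be irrelevant to the decision. The sketch below is illustrative only: classify is a hypothetical stand-in for the real model, and the brightness change simulates a difference in weather.

import numpy as np

def classify(image):
    """Hypothetical stand-in for the real model ('friend' or 'enemy').
    Hard-coded here so the sketch runs; swap in the actual model call."""
    return "friend"

def brighten(image, factor):
    """Simulate sunnier or gloomier weather by scaling pixel intensities."""
    return np.clip(image * factor, 0.0, 1.0)

def test_verdict_invariant_to_lighting():
    rng = np.random.default_rng(seed=42)
    image = rng.random((64, 64, 3))      # stand-in for a tank photo
    baseline = classify(image)
    for factor in (0.6, 0.8, 1.2, 1.4):  # darker and brighter variants
        assert classify(brighten(image, factor)) == baseline

If the verdict flips with the lighting, the model has likely learned the weather rather than the tanks.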

 

In practical terms, how do you test AI, especially when you're dealing with a platform like ChatGPT, where so much of the inner workings remains a mystery?

 

Practical experience and training are essential to familiarize yourself with AI, to understand how it works, and to make informed comparisons with other tools.

 

As we explore this subject, we see that technological progress is pushing us to redirect our attention towards more fundamental aspects, so to speak: the well-being of the creator (the developer or tester), which simplifies the development process; the well-being of the user; and respect for ecological imperatives. These are the three key principles of the concept you call Sustainable Quality. Can you tell us more?

 

In the context of digital services, the possibilities seem endless, but exhaustive testing is impossible. The key lies in focusing on the notion of value, and in particular on sustainable quality, which means maintaining quality over time. However, digital technology presents three major issues. First, its growing environmental impact requires an eco-design approach to guarantee its long-term viability. Second, the issue of designers' well-being is emerging, with sustained work rhythms sometimes leading to burnout and disengagement. Finally, it's crucial to remember that digital services are first and foremost services, designed for users; drifting towards profit rather than the real needs of users must be avoided.

 

Let's stay on the subject of users. Another topical issue is accessibility, with the RGAA (the French regulation based on the WCAG 2 standard) soon to come into force. Why has this subject suddenly become a priority?

 

Accessibility, although a long-standing issue, has never been a priority. Its current importance stems from its forthcoming mandatory status, much as the RGPD (GDPR) raised the importance of security. Penalties for non-compliance are not very severe, but companies must anticipate and absorb the costs of bringing their sites up to standard.
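
To give a flavour of what automating part of an RGAA audit can look like, here is a tiny sketch that checks a single criterion: every img element must carry an alt attribute (an empty alt explicitly marks a decorative image). Real audits rely on dedicated tools such as axe-core; this is only an illustration.

from bs4 import BeautifulSoup

def images_missing_alt(html):
    """Return the <img> tags that have no alt attribute at all.
    (alt="" is allowed: it marks a purely decorative image.)"""
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img") if img.get("alt") is None]

page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</body></html>
"""
print(f"{len(images_missing_alt(page))} image(s) missing alt text")  # -> 1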

 

To conclude, in 2024, if you had to give priority to one subject, what would it be?

 

For me, the priority lies in sustainable quality, but everyone should determine their own priority topics according to their preferences and skills. I liken it to deciding what to test in a piece of software: people involved in testing should treat themselves as the software under test, assessing their own knowledge and preferences to decide which topics to invest their time in to progress and improve. That may mean one, two or three topics, depending on individual preferences, technical or otherwise. The choice should also take account of market skills and demands. This is my recommendation.