QA - Blog

Generating test cases with an AI tool: relevant or not?

Written by QESTIT Team | Jun 23, 2024 12:34:23 PM

The field of information technology, and agile development in particular, is evolving rapidly. To keep up with this pace, emerging technologies such as Artificial Intelligence (AI) are proving to be invaluable allies, not only accelerating progress but also providing essential support throughout this transformation.


AI comes in various forms, ranging from specialized to general-purpose applications. Specialized AI tools can, for example, generate code without requiring extensive programming expertise, or summarize a web page from its URL. More general-purpose AI tools can assist IT professionals at several levels: speeding up code development, clarifying specifications, and even supporting the creation of test cases, which is the focus of this article.


AI and Testing


Testing plays a critical role throughout the development cycle of an IT project, irrespective of the methodology used (e.g., V-model or agile). It serves to identify and rectify malfunctions, whether serious or minor, as early as possible. By addressing potential bugs before delivery, the product’s reliability is assured in the eyes of end users.


Despite its significance, testing often receives less attention within a project, and the time allocated to it is frequently limited compared to other phases. This constraint can compromise the thoroughness of testing efforts, which makes it all the more important to use the time devoted to testing effectively.


Integrating AI into the testing process helps QA teams make better use of their time. AI can generate draft test cases in various forms, from plain natural language to Gherkin syntax, and in different languages, including English. While the AI’s responses may not always align perfectly with the software’s testing requirements, they significantly reduce the workload and save time by providing an initial draft of the test cases.


A Helpful AI Tool


One notable AI tool in this area is Gemini, developed by Google and accessible at https://gemini.google.com/app with a simple Gmail account. Gemini stands out for its ability to interpret natural-language instructions and generate corresponding test cases.

Its interface is intuitive, featuring a text area where users can input requests, specifications, or user stories; information can also be submitted via voice dictation using the microphone in the query field. Responses obtained may vary in completeness based on the query’s precision. If the provided responses fall short, a collapsible block offers three alternative suggestions from the AI. Additionally, users can request alternative formulations of responses, such as longer or more professional versions. The resulting responses can be copied to the clipboard for easy integration into our testing tool. 
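
For instance, a request submitted in the query field might look like the following (a hypothetical user story, used here purely for illustration and not taken from any real project):

    As a registered user, I want to log in to the web application with my email address and password so that I can access my personal dashboard.
    Generate the corresponding test cases in Gherkin format, covering both valid and invalid credentials.

The more precise the user story and the instruction, the closer the generated draft tends to be to what the test campaign actually needs.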


Gemini is user-friendly and allows users to maintain a history of their queries. However, it’s crucial to maintain vigilance regarding the quality of AI-provided responses, which can occasionally be incomplete or inaccurate. 


Here is an example of a Gherkin test case generated with Gemini. The request is typed into the query field and submitted; the query result is then displayed, and the AI provides several accessible draft results for the prompt.
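
As an illustration, a draft of the kind such a request tends to produce might look like this (a hypothetical sketch based on the login user story above, not actual Gemini output):

    Feature: Login to the web application

      Scenario: Successful login with valid credentials
        Given the user is on the login page
        When the user enters a valid email address and password
        And the user clicks the "Log in" button
        Then the user is redirected to their personal dashboard

      Scenario: Failed login with an incorrect password
        Given the user is on the login page
        When the user enters a valid email address and an incorrect password
        And the user clicks the "Log in" button
        Then an error message is displayed
        And the user remains on the login page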

The response provides test cases at varying levels of detail, and the QA tester can add or remove scenarios depending on the granularity desired for the test campaign.
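
For example, a tester might enrich the generated draft with an edge case it does not cover (again a hypothetical scenario, continuing the login example above):

      Scenario: Account is locked after three consecutive failed login attempts
        Given the user is on the login page
        And the user has already entered an incorrect password twice
        When the user enters an incorrect password a third time
        Then the account is temporarily locked
        And a message invites the user to reset the password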


AI vs. Human Expertise


Although AI has made tremendous progress in automated test case writing, it cannot yet completely replace the work of a human tester. Writing test cases involves more than just translating specifications into instructions; it also requires a deep understanding of the project’s context and requirements. Human testers bring critical expertise in identifying relevant test scenarios, detecting potential flaws, and validating test logic.  


AI can certainly facilitate and expedite the process by offering initial suggestions and automating certain repetitive tasks. However, manual writing remains essential to ensure the quality and relevance of test cases. Furthermore, human testers contribute discernment and creativity that are difficult to replicate with algorithms. 


Summary


In conclusion, AI can be a valuable aid in drafting initial test cases, simplifying and accelerating the process, but human intervention and manual clarification of the requirements are often necessary to ensure the relevance and effectiveness of the generated tests. AI, which is becoming ubiquitous in our society, can make our lives easier in many ways, but like any tool, it must be used thoughtfully to realize its full benefits.