Illustration of a digital checklist: a hand interacting with a virtual checkmark above a laptop

AI in Practice: Our Proof of Concept Shows That AI-Based Testing Is Already a Reality

While others are still discussing how artificial intelligence could improve testing processes, we’ve taken action: Our PoC shows how AI can enhance QA and increase efficiency.

 

Our goal: to automatically generate and validate test cases using AI – significantly reducing time, resource use, and error-related costs.

 

Our PoC Proves That AI-Based QA Works

 

Many companies face similar challenges:

 

  • Growing complexity in IT landscapes overwhelms testing teams.

  • Business knowledge is only sporadically available, although it is essential for effective tests.

  • Expectations for high test coverage and efficient processes continue to grow.

 

Manual test creation struggles to keep up. Our aim was to find out how an AI-powered workflow could support test design and evaluation. A real use case with real data and measurable KPIs was crucial for us.

 

 

Smarter Testing with AI – How Our AI-Based Workflow Works

 

With our QESTIT AI Workflow Framework, we’ve implemented a seamless process in which key steps of test design are handled automatically by two advanced language models (GPT-4o-mini and MistralAI):

 

  1. Processing of business and user documentation: understanding the requirements and the system in order to generate test cases.

  2. Test case generation using contextual prompts: generating appropriate test cases based on the analyzed information.

  3. Quality assurance through automated evaluation and redundancy reduction: optimizing test cases and ensuring compliance with layout and structural standards.

  4. Deriving insights for applicability to other use cases: scaling the approach.

 

The workflow was tested in a real application context and can be flexibly adapted to other systems and processes.
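As an illustration only, the four steps above can be sketched as a small pipeline. The function names and the stubbed model call are assumptions for this sketch, not part of the QESTIT AI Workflow Framework itself; `llm` stands in for a call to a language model such as GPT-4o-mini or MistralAI:

```python
from typing import Callable, List

def process_documentation(docs: List[str]) -> str:
    # Step 1: consolidate business and user documentation into one context.
    return "\n\n".join(doc.strip() for doc in docs if doc.strip())

def generate_test_cases(context: str, llm: Callable[[str], str]) -> List[str]:
    # Step 2: generate candidate test cases with a contextual prompt.
    prompt = f"Derive test cases from this documentation:\n{context}"
    return [line for line in llm(prompt).splitlines() if line.strip()]

def evaluate_and_deduplicate(cases: List[str]) -> List[str]:
    # Step 3: automated quality gate - drop duplicate cases, keep order.
    seen, kept = set(), []
    for case in cases:
        key = case.lower().strip()
        if key not in seen:
            seen.add(key)
            kept.append(case)
    return kept

def run_workflow(docs: List[str], llm: Callable[[str], str]) -> List[str]:
    # Step 4 (deriving insights for other use cases) is a human review
    # activity and is therefore not modeled in code.
    context = process_documentation(docs)
    return evaluate_and_deduplicate(generate_test_cases(context, llm))
```

In practice, each step would carry its own prompting, evaluation criteria, and formatting rules; the sketch only shows how the stages hand results to one another.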

 

 

Proven Benefits: Higher Quality, Efficiency, and Coverage

 

The outcome of our PoC:

 

  • 50% of the generated test cases were immediately usable

  • 40–50% higher test coverage compared to manual creation

  • Significant time savings from as few as 5 test cases

  • Efficiency advantages over generic AI approaches based solely on static prompting 

 

A particularly insightful test was applying the AI workflow to a new, previously unused scenario. Even without any individual adaptation to this context, the system generated high-quality test cases from the start. This shows that our approach is not limited to isolated use cases – it can be transferred to other systems and processes.

 

 

Key Learnings from the PoC

 

Despite the positive results, the PoC also revealed important insights regarding necessary conditions and limitations:

 

  • Outdated or incomplete documentation leads to errors – high information quality is therefore essential. We support this with various AI utilities for assessing and improving documentation quality – for example, through automatic table optimization or the structured separation of unstructured documents.

  • Without documentation optimization, resource consumption and the amount of unusable data and formatting were very high. AI-supported enhancements significantly improved resource efficiency and the quality of generated results.

  • The solution’s maintenance and further development must be planned long-term – with a focus on flexible integration and ease of use within existing system landscapes.
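Redundancy reduction of the kind described above can be approximated with standard text similarity. The function and the 0.9 threshold below are illustrative assumptions for this sketch, not the actual QESTIT utilities:

```python
from difflib import SequenceMatcher
from typing import List

def drop_near_duplicates(cases: List[str], threshold: float = 0.9) -> List[str]:
    # Keep a test case only if it differs sufficiently from every case
    # already kept (pairwise similarity below the threshold).
    kept: List[str] = []
    for case in cases:
        if all(
            SequenceMatcher(None, case.lower(), k.lower()).ratio() < threshold
            for k in kept
        ):
            kept.append(case)
    return kept
```

A production setup would more likely compare semantic embeddings than raw strings, but the principle of filtering near-duplicates before review is the same.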

 

 

Next Steps: Scaling and Integration

 

Our PoC has demonstrated the potential of AI in testing – and delivered measurable value. Through our workflow approach, we are making AI a productive component of modern testing. What’s coming next:

 

  • Exporting test cases into systems like Jira

  • Context-sensitive integration with Confluence for documentation analysis

  • Applying the workflow to additional systems and applications

  • Integrating automated test scripts for direct reuse
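Exporting a generated test case to Jira could, for example, go through Jira's REST API (`POST /rest/api/2/issue`). The helper, the project key "QA", and the issue type "Test" below are illustrative assumptions, not details of our integration:

```python
from typing import Dict

def jira_issue_payload(summary: str, description: str,
                       project_key: str = "QA",
                       issue_type: str = "Test") -> Dict:
    # Build the JSON body expected by Jira's "create issue" endpoint
    # (POST /rest/api/2/issue). Actually sending the request, e.g. with
    # the requests library and authentication, is omitted here.
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    }
```

The issue type must exist in the target Jira project; many instances use "Test" or a test-management plugin's own type.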

 

Together with our clients, we continuously develop the approach – implementing data-driven improvements, increasing performance, applying findings and new insights from our PoCs, and unlocking new application scenarios. Our AI solutions are developed with a practical focus: they address real challenges in day-to-day testing and go far beyond marketing buzzwords. Our long-standing testing expertise flows directly into development and ensures that our AI delivers real added value.


 

Conclusion

 

Our PoC shows: AI effectively supports business and domain experts, strengthens their role and evaluation capabilities, and helps make the software engineering process more standardized and streamlined. At the same time, it measurably improves the quality of test cases – leading to more efficiency and reliability in the entire testing process.

 

It’s not about theoretical concepts, but real, actionable solutions. What matters is not just experimentation, but delivering results. And that’s exactly what we’ve done – with measurable success.

Tobias Hilke

As Head of the Competence Center for AI at QESTIT, Tobias Hilke oversees all aspects of Artificial Intelligence. With recognized expertise in AI, process and change management, and compliance, he excels at bridging the gap between business units and IT teams. His focus lies in identifying the potential of AI technologies and translating them into customized solutions that deliver long-term value.
