How AI Testing Can Increase ROI for Your Business
There is no doubt that software systems are among the most complex and least reliable technological systems that human beings have constructed. Software engineers must usually make assertions about the reliability of a system after observing only a tiny fraction of its possible states. The rapid growth, ubiquity and complexity of software systems have brought fundamental aspects of software quality, including reliability, portability and maintainability, to the fore. New mechanisms and techniques for inferring the overall quality and reliability of software systems are therefore needed.
In this article, we explore how artificial intelligence (AI) and machine learning (ML) can provide such mechanisms for software quality assurance, and how your organisation can benefit from them.
Improving Software Quality
At the most basic level, quality means how well a product performs in use. This is a customer's perception and does not directly translate into engineering specifications. Thus, issues such as reliability, performance and usability play a crucial role in a system's overall quality. Software quality is perhaps the most critical technological challenge of the 21st century. Since large software systems may have millions of variations, assessing software quality through exhaustive testing is impractical, which is where AI can catalyse the process.
AI could be used to generate and optimise tests, reduce tedious analysis, and mine production data to track feature usage so teams can prioritise what to automate and what to test.
Automated Test Design
Ideally, AI would interpret requirements (e.g., user stories) and instantly generate a minimal set of test cases that maximises risk coverage. However, this doesn't seem feasible in the near future, since test cases are currently derived from requirements expressed in natural language. AI would need to understand not only the individual bits and pieces of a requirement, but also its context (e.g., business use cases), its meaning and its relevance (e.g., business impact) in order to estimate the risk contribution of that requirement. It would also need to link these learnings to the application's individual technical components in order to derive test cases from the requirements.
Nevertheless, the use of AI in automated test design is something to look out for in the near future.
Redundancy prevention is a big issue today. Identifying physically identical test cases is easy, but that alone doesn't solve the problem. Spotting logically identical test cases is considerably more challenging. At the moment, it doesn't seem economically reasonable to train a system to detect these logical identities without substantial manual pre-configuration. To identify these redundancies, you need to appropriately flag the business relevance of test data (and test actions), then assign the test data to equivalence classes. Once this is accomplished, eliminating redundancies is trivial; no learning system is required.
In this way, AI can eliminate or prevent redundancies in existing test case portfolios, achieving the same business risk coverage with less effort.
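To make the equivalence-class idea concrete, here is a minimal sketch. The test-case structure, the field names (`actions`, `data_class`) and the sample portfolio are illustrative assumptions, not part of any real tool: two cases with the same action sequence and the same business-relevant data class are treated as logically identical, and only one representative per class is kept.

```python
from collections import defaultdict

def deduplicate(test_cases):
    """Keep one representative test case per equivalence class.

    A test case is a dict with an 'actions' sequence and a 'data_class'
    label (the business-relevant equivalence class of its test data).
    """
    groups = defaultdict(list)
    for tc in test_cases:
        key = (tuple(tc["actions"]), tc["data_class"])
        groups[key].append(tc)
    # The first case seen in each class stands in for all of them.
    return [cases[0] for cases in groups.values()]

portfolio = [
    {"id": "T1", "actions": ["login", "checkout"], "data_class": "valid_card"},
    {"id": "T2", "actions": ["login", "checkout"], "data_class": "valid_card"},
    {"id": "T3", "actions": ["login", "checkout"], "data_class": "expired_card"},
]
print([tc["id"] for tc in deduplicate(portfolio)])  # T2 is redundant with T1
```

Note that all the intelligence sits in the labelling: once each test case carries its equivalence class, the elimination itself is a plain grouping step.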
Risk Coverage Optimisation
AI can find optimal test sets that maximise business risk coverage and defect detection under given time, resource and budget constraints. This is undeniably a smart thing to do, but it doesn't require learning systems: the goal can be achieved with traditional rule-based mathematical optimisation algorithms. It involves maximising defect detection; minimising costs by reducing the required resources (e.g., testers, machines); minimising execution time; minimising the number of test cases; and maximising risk coverage within a pre-defined timeframe (e.g., one day).
In order to enable this to work, it is necessary to know the probability that a certain test case will detect a defect of a certain severity. This probability can be approximated from past test runs without too much effort. Next, we need to know the risk contribution of each individual test case. This information is already available once a test case has been linked to a requirement (eg, user story). We also need to know the average execution time of each test case. This can be easily derived from past test runs. For newly added test cases, the time can be estimated based on the average execution time of test cases with a similar sequence of test actions. To minimise the costs, we just need to know how much a human tester or machine costs per unit time. This must only be configured once. To minimise the resources, we need to know what resources (human and machine) are available. This too must only be configured once.
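A rule-based version of this optimisation can be sketched in a few lines. The sketch below is an assumption about how such a selector might look, with made-up field names (`risk`, `p_defect`, `minutes`) and sample data: it greedily picks the test cases with the highest expected risk coverage per minute until the time budget is spent.

```python
def select_tests(test_cases, time_budget):
    """Greedily select test cases: best expected risk coverage per minute first.

    'risk' is the test case's risk contribution (from its linked requirement),
    'p_defect' the historical probability of detecting a defect, and
    'minutes' the average execution time from past runs.
    """
    ranked = sorted(
        test_cases,
        key=lambda tc: tc["risk"] * tc["p_defect"] / tc["minutes"],
        reverse=True,
    )
    selected, used = [], 0.0
    for tc in ranked:
        if used + tc["minutes"] <= time_budget:
            selected.append(tc["id"])
            used += tc["minutes"]
    return selected

suite = [
    {"id": "T1", "risk": 0.9, "p_defect": 0.5, "minutes": 10},
    {"id": "T2", "risk": 0.2, "p_defect": 0.9, "minutes": 30},
    {"id": "T3", "risk": 0.7, "p_defect": 0.4, "minutes": 5},
]
print(select_tests(suite, time_budget=20))  # ['T3', 'T1']
```

This is a classic knapsack-style heuristic; a production tool would likely use an exact solver, but the point stands that no machine learning is needed once the probabilities, risk contributions and execution times have been collected.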
Other Use Cases
Some other use cases of AI in software testing, available today or expected in the near future, that can significantly improve ROI for your business are:
Portfolio Inspection: AI can track flaky and unused test cases, as well as test cases not linked to any requirement and requirements not covered by any test, to reveal weak spots in test case portfolios.
False-Positive Detection: AI can also reduce the effort required to analyse results by showing whether a failed test case actually detected a defect in the application or only failed because of a technical defect in the test case itself.
Automated Exploratory Testing: AI can interact with the application, build a model of it, discover relevant functionality, reveal defects, and extract test cases to reduce regression test effort.
Automated Defect Diagnosis: AI can propose potential causes of a test case failure, reducing the time and effort needed to determine the root cause of the fault.
User Experience Analysis: AI can interpret user emotions during exploratory testing and link its findings back to the related application component, increasing the precision of UX analysis.
Self-Healing Automation: Finally, AI can heal broken automated test cases by updating controls (e.g., buttons) and their properties (e.g., IDs) at runtime, making test automation more resilient to change.
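The self-healing idea above can be illustrated with a small sketch. The element representation and property names here are illustrative assumptions, not any real automation framework's API: when a recorded control ID no longer exists, the locator falls back to the candidate whose remaining recorded properties best match.

```python
def locate(elements, target):
    """Find a control by ID; if the ID is stale, fall back to the element
    whose other recorded properties best match."""
    for el in elements:
        if el.get("id") == target["id"]:
            return el
    # ID changed: score candidates by how many other properties still match.
    def score(el):
        return sum(el.get(k) == v for k, v in target.items() if k != "id")
    best = max(elements, key=score, default=None)
    return best if best is not None and score(best) > 0 else None

page = [
    {"id": "btn-42", "text": "Buy now", "type": "button"},
    {"id": "lnk-7", "text": "Help", "type": "link"},
]
recorded = {"id": "btn-1", "text": "Buy now", "type": "button"}  # ID changed
print(locate(page, recorded)["id"])  # the "Buy now" button is re-found
```

A real self-healing engine would weight properties by stability (text and role usually outlive auto-generated IDs) and update the stored locator after a successful match, but the fallback-matching principle is the same.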
Software systems are among the most complex technological systems in the world today, and the most ephemeral: they exist as pure information, without any physical component. Software has a reputation for being the most error-prone of all engineering constructs, and yet it is an essential element of modern infrastructure and the economy. This crisis in software quality is arguably the most urgent technological challenge of the 21st century. AI testing can contribute significantly to this domain by improving the efficiency of testing processes and increasing marginal returns for businesses.