Incorporating AI and Machine Learning into QA

February 25, 2021

Featured article by Erik Fogg, Co-Founder and Chief Operating Officer at ProdPerfect


The evolution of artificial intelligence and machine learning (AI & ML) has had an enormous impact on the way we develop and test software, but this impact has been unevenly distributed. Studies show that very few companies have incorporated AI/ML into their business processes, and those that have are typically larger companies. This may be due to the perception that AI/ML processes are difficult to incorporate, or that companies feel they lack the necessary in-house expertise. However, there are many benefits to incorporating AI/ML into development functions such as software quality assurance (QA), and those benefits are not restricted to a few large corporations. As algorithmic processes become more prevalent in all types of software, QA must adapt to keep up, and the adoption of ML into QA seems an inevitability. Companies that adopt these processes find their tests provide greater value, take less time to run, and adapt to changes in consumer behavior much more quickly. Those who successfully adopt AI and ML tools and services earlier will achieve a substantial comparative advantage.

The merits of ML in software testing

Many companies may claim to understand the importance of software testing and QA, but in practice, it’s a step in the development life cycle that often gets overlooked or underappreciated. As recently as 2018, 75% of teams were not keeping testing in lockstep with software development, which means many companies are deploying code to production environments that has not been properly tested. This exposes businesses to a range of risks, from mundane bugs and glitches to more serious issues, such as leaking sensitive information. ML can dramatically improve the ease and efficacy of building strong QA testing into the software development lifecycle (SDLC).

Incorporating ML into QA processes does not require a wholesale change in how testing is conducted; it’s something of a misunderstanding that algorithms are replacing the need for human testers. For many companies, large and small, ML is best thought of as an augmentation of their existing testing process. As many as 42% of companies still test entirely manually. Human factors can leave holes in your test process, whether test cases were not properly considered, were forgotten, or were skipped to get things out the door on time. Test suites can also become unwieldy: testers may not know which tests provide the most value, so they attempt to cover everything. Test runtime is extremely expensive, in both time and money. With 25% of software budgets dedicated to testing, you should be sure you are getting your money’s worth.

Intelligent test suites can identify what needs testing, automatically generate test cases, and monitor testing processes to provide information and insights into your software product or web application that can cut down on testing runtime and help your manual testers work more productively in the same amount of time.
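The idea of identifying what most needs testing can be sketched with a simple scoring heuristic. This is a hypothetical illustration, not the method of any particular tool: each test is scored by how heavily users exercise the covered feature and how recently its code has changed, and the highest-value tests run first.

```python
# Hypothetical sketch: ranking test cases by expected value.
# The field names and weights are illustrative, not from any real tool.

def prioritize_tests(tests, top_k=3):
    """Rank tests by a simple value score: how often the covered
    feature is used, times how much its code has recently changed."""
    scored = sorted(
        tests,
        key=lambda t: t["usage_rate"] * t["churn_rate"],
        reverse=True,
    )
    return [t["name"] for t in scored[:top_k]]

tests = [
    {"name": "checkout_flow", "usage_rate": 0.9,  "churn_rate": 0.8},
    {"name": "profile_edit",  "usage_rate": 0.3,  "churn_rate": 0.1},
    {"name": "search_bar",    "usage_rate": 0.7,  "churn_rate": 0.5},
    {"name": "legal_footer",  "usage_rate": 0.05, "churn_rate": 0.0},
]

# checkout_flow and search_bar rank first; legal_footer never makes the cut.
print(prioritize_tests(tests))
```

Real ML-driven suites learn these weights from production telemetry rather than hand-assigning them, but the payoff is the same: scarce test runtime goes where it returns the most value.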

What ML is best suited for in QA

One of the biggest advantages of utilizing ML processes is that they can prioritize where tests are most beneficial and adapt to ambiguous outputs. As software becomes more complex, expected outputs are not always well-defined or may not be known ahead of time. Modern techniques, such as predictive analytics, can make adjustments to underlying algorithms in production environments in order to anticipate emerging needs in customer bases and adapt to changes based on feedback. For example, an airline may choose to adopt a demand-based pricing model that attempts to find the best balance between flight time, seat availability, and profitability. There is no ‘exact’ result, and a maximum optimization value may not even be mathematically possible. How would this be tested using manual QA processes?

In these scenarios, tests require well-defined acceptance criteria rather than exact expected outputs or defect counts. Acceptability is expressed as a statistical likelihood that the returned value falls within a defined range. These test scenarios require well-crafted data models, which remain the responsibility of human testers. In cases like these, the value of ML as an augmentation of testing becomes apparent: rather than defining and creating every test case manually, testers define acceptance criteria and build the data models used to train and adapt ML algorithms. Human testers are particularly valuable early on, in exploratory test scenarios where little hard data is available to work with.
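A statistical acceptance criterion like this can be expressed in a few lines. The following is a hedged sketch using the airline-pricing example above; the prices, range, and 90% threshold are invented for illustration.

```python
# Hypothetical sketch of a statistical acceptance test: instead of one
# exact expected output, we assert that a high fraction of a model's
# outputs fall inside an agreed-upon band.

def acceptance_rate(predictions, low, high):
    """Fraction of predicted values that land inside [low, high]."""
    inside = sum(1 for p in predictions if low <= p <= high)
    return inside / len(predictions)

# Simulated outputs from a demand-based pricing model.
predicted_prices = [112.0, 98.5, 105.3, 120.9, 101.7, 94.2, 108.8, 131.0]

# Acceptance criterion: at least 90% of prices between $90 and $135.
rate = acceptance_rate(predicted_prices, low=90.0, high=135.0)
assert rate >= 0.90, f"only {rate:.0%} of prices in range"
```

Note that the test never claims to know the "correct" price; it only checks that the model's behavior stays within the envelope the human testers defined.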

Ways ML can streamline the testing process

Automated test generation, continuous testing, and maintenance allow QA processes to quickly adapt to changes in underlying codebases, which may see multiple commits and pushes to production branches on a weekly or even daily basis. UI testing, in particular, is a pain point for many test suites, as it is especially susceptible to rapid changes that require thorough test maintenance to stay relevant. Here, the power of self-healing tests becomes valuable in keeping test maintenance time to a minimum without slowing down development.

Rather than identifying objects by unique IDs or tags alone, recognition criteria build a model based on property-value pairs to identify each unique object. Without self-healing tests, when a unique identifier is changed or removed, tests must be manually updated. In software with many thousands of UI test cases, the time saved cannot be overstated. Combined with automated visual validation testing, this allows correctness to be verified both in the code and in the generated output, further compounding the time saved and streamlining testing in an area that otherwise requires a large amount of manual maintenance effort.
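The property-value matching behind self-healing lookup can be sketched as follows. This is a minimal illustration of the general idea, not the algorithm of any specific testing product; the element attributes and the 0.5 threshold are assumptions.

```python
# Hypothetical sketch of self-healing element lookup: rather than
# failing when an element's id changes, score each candidate by how
# many recorded property-value pairs still match and take the best.

def find_element(candidates, recorded_props, threshold=0.5):
    """Return the candidate sharing the most properties with the
    recorded model, or None if nothing matches well enough."""
    def score(el):
        matches = sum(1 for k, v in recorded_props.items()
                      if el.get(k) == v)
        return matches / len(recorded_props)

    best = max(candidates, key=score)
    return best if score(best) >= threshold else None

# Model recorded when the test was first written.
recorded = {"id": "submit-btn", "tag": "button",
            "text": "Place order", "class": "primary"}

# After a refactor the id changed, but the other properties survived,
# so the lookup still resolves to the right button.
page = [
    {"id": "order-submit", "tag": "button",
     "text": "Place order", "class": "primary"},
    {"id": "cancel-btn", "tag": "button",
     "text": "Cancel", "class": "secondary"},
]

element = find_element(page, recorded)
assert element is not None and element["text"] == "Place order"
```

A brittle ID-only locator would have failed here; the property model degrades gracefully instead, which is exactly what keeps maintenance time down across thousands of UI tests.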

ML and the future of QA testing

Predicting the unpredictable looks set to be the next major challenge facing QA testing. As ML algorithms become more and more commonplace in all sorts of industries and software products, having expected return values that are known ahead of time becomes less common, and this presents a challenge for traditional QA processes.

Cognitive QA uses smart analytics and intelligent test automation to make decisions about QA based on customer behavior and feedback. Relying on smart QA processes eliminates subconscious human bias and emotion from data sets and analyses. Incorporating user behavior into test data allows for tests to be developed where they are most needed with automatic test scenario selection based on the maximum return on investment. When QA runtime is so valuable, knowing where your testing efforts are best focused is extremely advantageous.
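Selecting scenarios by return on investment can be sketched as a small budgeting problem. The sketch below is illustrative only: it assumes we know each scenario's share of real user traffic and its runtime, values a real system would learn from analytics rather than hard-code.

```python
# Hypothetical sketch of ROI-based scenario selection: weight each
# scenario by the user traffic it covers per minute of runtime, then
# greedily fill a fixed time budget. All numbers are made up.

def select_scenarios(scenarios, budget_minutes):
    """Greedily pick scenarios with the best traffic-per-minute
    ratio until the runtime budget is exhausted."""
    ranked = sorted(scenarios,
                    key=lambda s: s["traffic_share"] / s["minutes"],
                    reverse=True)
    chosen, used = [], 0.0
    for s in ranked:
        if used + s["minutes"] <= budget_minutes:
            chosen.append(s["name"])
            used += s["minutes"]
    return chosen

scenarios = [
    {"name": "login",    "traffic_share": 0.40, "minutes": 2.0},
    {"name": "checkout", "traffic_share": 0.30, "minutes": 5.0},
    {"name": "search",   "traffic_share": 0.25, "minutes": 1.0},
    {"name": "settings", "traffic_share": 0.05, "minutes": 4.0},
]

# search and login give the best traffic coverage per minute, so the
# rarely-visited settings flow is dropped when the budget is tight.
print(select_scenarios(scenarios, budget_minutes=8.0))
```

The point of the sketch is the shape of the decision, not the greedy heuristic itself: when runtime is the scarce resource, selection should be driven by observed user behavior rather than by whichever tests happen to exist.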

All of this is working towards an ML testing singularity, in which fully automated testing processes are used to test automatically generated code. While this is not yet possible, the benefits ML processes bring to QA testing are simply too valuable to ignore. The testing singularity is less a possibility than an inevitability, but it will be built on the back of human testers working with algorithms to create tests that are more precise, more efficient, and more valuable than before.


Erik Fogg is Co-Founder and Chief Operating Officer at ProdPerfect, an autonomous E2E regression testing solution that leverages data from live user behavior.
