AI-Driven Testing: A New Era of Test Automation

Tariq King, Director, Quality Engineering, Ultimate Software

Test automation as we know it today is nothing more than an illusion, created in the absence of technological innovation in software testing. If you don’t believe me, just look up the definition of automation in the dictionary, or compare the automation practices in software testing with those of other disciplines. Automation is the process of making a machine or system work without being directly controlled by a human. When something is automated, it becomes capable of starting, operating, and completing independently. In manufacturing, as far back as the mid-1950s, ideas around automation led to the notion of lights-out manufacturing: production work done with the factory lights out because it does not rely on a continual human presence. Unfortunately, our current view of automation in software testing is far from a lights-out philosophy.

“Automation as it is defined in software testing is neither very automated, nor has much to do with testing.”

When testers talk about automation, they are referring to manually encoding a predefined set of program input actions and output verification steps into a script, which can in turn be executed by a machine. Upon execution, a log of the results is generated, stored, and associated with the run. Clearly, the only aspects of this process that are truly automated are test execution and logging. Human testers are needed to define testing goals, acquire the knowledge necessary to adequately test the software, design and specify detailed test scenarios, write the test automation scripts, execute scenarios that could not be automated, and analyze the test results to determine any threats to the project. Not only does this activity require too much manual effort to be called automated, but it is too granular to be classified holistically as testing. Software testing expert Dr. Cem Kaner once described a test as a question you ask a program to gain information about it. The narrower the scope of the question, the more limited the investigation around it, and the less you are likely to learn from its answer. Along similar lines, James Bach and Michael Bolton distinguish between testing and checking.

• Testing is evaluating a product by learning about it through experimentation, which includes questioning, studying, modeling, observing and making inferences.

• Checking is making evaluations by applying algorithmic decision rules to specific observations of a product.

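To make the distinction concrete, here is a minimal sketch of the kind of scripted automation described above, written in Python with Selenium (the URL, element locators, and expected text are hypothetical). Every input and expected output is fixed in advance by a human; the machine merely executes the steps and logs the outcome.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical application and locators; every step below was decided
    # by a human in advance and is merely replayed by the machine.
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # A single, narrow question answered by an algorithmic decision rule: a check.
    assert "Welcome" in driver.find_element(By.ID, "greeting").text
    driver.quit()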

Simply put, a test is an experiment, and although checking specific facts about a program may be a part of that experiment—testing is a lot more than fact checking. Whether or not you agree with the distinction between testing and checking, current test automation practices are limited. Automated test scripts tend to only target functional, structural and performance issues. Testing concerns like usability and accessibility are often deemed too difficult to automate. On the bright side, we are entering a new era where artificial intelligence (AI) and machine learning (ML) are redefining the meaning of automated testing.

“If testing is about evaluating a product by learning about it through experimentation, then automated testing is about having machines perform those activities instead of humans.”

Traditionally machines have been programmed to follow explicit instructions. Humans, on the other hand, learn a lot through observation and experience. AI and ML allow computers to learn like humans by representing past observations and experiences as data. Instead of hard-coding knowledge and task-specific instructions, learning algorithms are trained using concrete examples of the concepts the machine needs to recognize. Modern applications of AI include image and voice recognition, e-mail spam filtering, credit card fraud detection, medical diagnosis, gaming, and self-driving vehicles. Researchers and practitioners have recognized the potential for advances in AI to help bridge the gap between human and machine-driven testing. As a result, a new wave of AI-based test automation tools is already being developed. Such tools leverage autonomous and intelligent agents, commonly referred to as bots, to automatically drive the testing process.
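
To make the contrast with hard-coded instructions concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: instead of writing rules by hand that say what a button or a text field looks like, a handful of labeled examples are used to train a classifier that can then label widgets it has never seen. The features and labels are invented for illustration; a real test bot would learn from far richer data such as screenshots or DOM trees.

    from sklearn.tree import DecisionTreeClassifier

    # Each widget is described by simple, hypothetical numeric features:
    # [width, height, has_border (0/1), contains_text (0/1)]
    training_features = [
        [200, 40, 1, 1],   # labeled by a human as a text field
        [120, 35, 1, 1],   # button
        [200, 40, 1, 0],   # text field
        [16,  16, 1, 0],   # checkbox
        [110, 30, 1, 1],   # button
    ]
    training_labels = ["textfield", "button", "textfield", "checkbox", "button"]

    # Train on concrete examples rather than hand-coding if/else rules.
    classifier = DecisionTreeClassifier().fit(training_features, training_labels)

    # The trained model can now classify a widget it has never seen before.
    print(classifier.predict([[190, 38, 1, 1]]))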

“Test bots are designed to mimic how human testers observe, explore, model, reason, and learn about a software product.”

So what are some of the ways in which AI can imitate human testers? Here are seven AI-driven testing capabilities that exist today:

1. Discovering Application Structure: Like humans, test bots can perceive the different screens and widgets in an application and classify them correctly, even if details are missing or it is the first time the bot is examining the application under test.

2. Exploring Application Behavior: Test bots generate actions such as filling out a form on a screen, clicking submit, and checking for an appropriate response. The likelihood of a bot performing a given exploratory action may be based on historical data collected from human-present software testing sessions (a minimal sketch of this idea appears after this list).

3. Modeling and Reasoning: To perform their functions, test bots construct models of the application under test or of the different testing activities. They use these models to make decisions in the presence of uncertainty, or reason about the quality of their own actions and observations.

4. Detecting Failures and System Changes: Test bots leverage image recognition and other techniques to determine when a failure occurs, or to detect legitimate changes in the current version of the application. AI has enabled visual UI test automation.

5. Learning from Tests or User Traces: Artifacts such as test scripts and execution traces contain concrete examples of interesting paths that human testers and end users cover when exploring a given application. Test bots trained on these real-world examples can generalize them to new applications.

6. Declarative or Goal-Based Testing: An impressive feature of AI-driven testing is the ability to specify a testing goal and have the bots automatically figure out how to achieve that goal. Intent may be specified in natural language or using an abstract testing language.

7. Adapting Testing: Test bots can modify their behavior at runtime based on feedback. This is generally achieved through an ML technique known as reinforcement learning. In reinforcement learning, positive outcomes are rewarded, and negative outcomes are punished, allowing the bots to improve over time (a second sketch after this list illustrates the idea).
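
To illustrate capability 2, here is a minimal, hypothetical Python sketch of how a bot might bias its exploration toward the actions human testers performed most often on a similar screen. The action names and frequencies are invented; a real tool would mine them from recorded testing sessions.

    import random

    # Hypothetical frequencies mined from recorded human testing sessions:
    # how often testers performed each action on a login screen.
    action_counts = {
        "type_username": 120,
        "type_password": 115,
        "click_submit": 98,
        "click_forgot_password": 12,
        "resize_window": 3,
    }

    def choose_exploratory_action(counts):
        """Sample the next action, weighted by how often humans chose it."""
        actions = list(counts)
        weights = [counts[a] for a in actions]
        return random.choices(actions, weights=weights, k=1)[0]

    # The bot still tries rare actions occasionally, but spends most of its
    # time on the paths that real testers and users actually exercise.
    for _ in range(5):
        print(choose_exploratory_action(action_counts))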

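To illustrate capability 7, the sketch below shows the core of a reinforcement learning loop in Python: the bot tries an action, receives a reward (for example, positive for reaching a new screen, negative for an error or a dead end), and updates a value table so that rewarding actions become more likely over time. The states, actions, and reward scheme are placeholders; a real tool would wrap them around the application under test.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
    ACTIONS = ["click_next", "fill_form", "click_back"]

    # Q[state][action] estimates how valuable an action is in a given state.
    Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def step(state, action):
        """Placeholder for driving the real application: returns the next
        state and a reward, e.g. +1 for a new screen, -0.1 for no progress."""
        next_state = (state + len(action)) % 10
        reward = 1.0 if action == "fill_form" else -0.1
        return next_state, reward

    state = 0
    for _ in range(100):
        # Mostly exploit what has worked so far, but keep exploring a little.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)
        next_state, reward = step(state, action)
        # Q-learning update: rewarded actions become more likely next time.
        best_next = max(Q[next_state].values())
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state
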
It is important to mention that AI-driven testing is still in its early stages, with much work to be done. Testing challenges such as the oracle problem, input and state explosion, and test data selection remain open research problems. Furthermore, most of the current AI-driven testing tools address software testing from an external, black-box perspective, whereas a significant amount of software testing today happens from the inside using white-box approaches. Nonetheless, despite its infancy, challenges, and limitations, AI-driven testing is bringing some much-needed innovation and buzz to the industry. The age of the testing robot is upon us. The question is: are you ready for the rise of the testing machines?
