Trust, but verify.

Ronald Reagan

What is Regression testing?

Regression testing verifies that a previously developed and tested software system still performs correctly after a change. If functionality that used to work breaks after a change, the software is said to have regressed. Besides code modifications, changes that may require regression testing include changes to configuration, libraries, dependent software, and even the underlying hardware.

Regression testing can be manual, automated, or a combination of both and consists of running both functional and non-functional tests.

The principal goal of regression testing is to ensure that previously fixed bugs stay fixed and that existing functionality keeps working in the new build.

Smoke testing and sanity testing are types of regression testing.

Why do Regression Testing?

More often than not, disparate features share some underlying code. Looking at a change only from the end-user's perspective is not enough to determine everything it affects. Merely reading the modified code is also not enough to grasp its full effects.

Let's take an example.

Developers slightly changed the layout of the product screen in an e-commerce application.

The testers assumed that this change was unlikely to affect other features such as search, the product catalog, the shopping cart, and billing. They verified that the product page worked correctly for different products on various browsers and devices and gave the green light for release.

But there was a problem.

A developer had modified the abbreviate function, which shortens long strings, to truncate its input to 40 characters instead of 80. The product screen worked fine. However, the shopping cart screen used the same function, and after the change it no longer displayed some crucial details. Checkouts dropped by 25%, and the website lost revenue.

A regression suite covering the shopping cart screen would have caught the problem, and the website could easily have avoided this loss.

Let's take another example that is more subtle and dangerous and highlights the importance of running non-functional tests.

An online bidding company decided to move from a commercial database to a popular, free, open-source database to lower costs.

After development, the QA team thoroughly tested all the features. Everything worked, and the software was released.

The software choked. Users couldn't place bids in time.

The QA team had run only functional tests; they had skipped performance testing because the new database's published benchmark numbers looked strong.

While the benchmark figures were correct, the application's code depended heavily on row locking, which was not the new database's strong point.

They had to revert and were set back by many months.
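To see why lock granularity can hurt even when raw benchmarks look fine, consider a deliberately simplified toy model (no real database involved; the functions and numbers are illustrative assumptions). It counts how many serial "rounds" a batch of concurrent bids needs under per-row locking versus a coarser table-level lock:

```python
from collections import Counter

def rounds_row_locking(bids):
    """With per-row locks, bids on different items proceed in
    parallel; the bottleneck is the most-contended single item."""
    return max(Counter(bids).values()) if bids else 0

def rounds_table_locking(bids):
    """With a table-level lock, every bid serializes behind the
    others, regardless of which item it targets."""
    return len(bids)

# Six concurrent bids spread over four auction items.
bids = ["item-A", "item-B", "item-C", "item-A", "item-B", "item-D"]
print(rounds_row_locking(bids))    # 2 rounds: worst item has 2 bids
print(rounds_table_locking(bids))  # 6 rounds: everything serializes
```

A single-threaded benchmark measures neither number, which is why a performance regression test under realistic concurrency was needed here.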

Selecting Test Cases for Regression Testing

Today, regression testing has become critical because software releases happen frequently, and it is not feasible to thoroughly test the entire application every time. It is therefore vital to select regression tests that are quick to run yet provide high coverage.

A good regression test selection consists of:

  • Test cases related to recurrent defects
  • Test cases for functionality that is visible to end-users
  • Test cases for functionality that has recently changed
  • All integration test cases
  • All complex test cases
  • Boundary value test cases
  • A sample of negative test cases
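One way to automate this selection is to tag each test case with the categories above and pick only the matching ones. The sketch below is an illustration; the tag names and test cases are assumptions, not taken from any real suite:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    tags: set = field(default_factory=set)

# Categories mirroring the selection criteria listed above.
REGRESSION_TAGS = {
    "recurrent-defect", "user-visible", "recently-changed",
    "integration", "complex", "boundary", "negative-sample",
}

def select_regression_suite(cases):
    """Return the cases whose tags intersect the regression categories."""
    return [c for c in cases if c.tags & REGRESSION_TAGS]

suite = [
    TestCase("test_checkout_total", {"user-visible", "integration"}),
    TestCase("test_abbreviate_boundary", {"boundary", "recently-changed"}),
    TestCase("test_internal_logging_format", {"low-risk"}),
]

selected = select_regression_suite(suite)
print([c.name for c in selected])
# The first two cases are selected; the low-risk logging test is skipped.
```

Test runners such as pytest offer the same idea natively via markers (for example, `pytest -m regression`), so tagging once lets the fast regression suite run on every release.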