Testing Results and Predictions

Product testing and verification is not a simple endeavor, and prediction is harder still. However, it is possible to glean useful information from key metrics that are readily available in the course of doing the work.

Testing is About Learning

There is more than one reason to build iterations and test constantly throughout the development of the product. One of the biggest is that it gives us opportunities to learn about the product while developing, rather than relegating that learning to the end of the project, when we can do or predict very little. When you take a long trip to a new place for the first time, do you wait until you are close to your destination to check the route? My bet is no. We have the opportunity to learn early, but do we take it? This learning provides the information we need to discern trends in the development of the product.

Testing and Prediction

If we do not care to prognosticate accurately, we can choose to make a prediction arbitrarily. Experience suggests that is how the determination that a product is ready for launch frequently happens: we have run out of time for testing, so the product must be ready for the customer. We can simply pronounce or proclaim a prediction, or we can test methodically in parallel with the product's growth and monitor key attributes and test results. My personal preference is to remove as much of the arbitrary from the equation as possible and make a reasonable attempt at prediction. That sounds simple, but it is not as easy as it sounds.

Development and Testing, Arm in Arm

We have discussed in earlier posts how the product is developed in increments under the control of the configuration management function. Each one of these development loops provides us with an opportunity to learn via our testing. The package (software and/or hardware, or a system, depending upon the scope of the work) is set for testing to start. We have set the stage; however, we are not ready to predict just yet.

Test Tracking and Reporting Systems

Common Tool

We will start by stipulating that we have generated test cases. Those test cases can be categorized in a variety of ways. We will write about these categories in a later post; for now, a list will suffice:

  • Requirements based (compliance) testing
  • Combinatorial testing
  • Extreme testing
  • Stochastic testing
  • Attack testing

We will have test cases associated with whatever test approaches we employ. The next step requires some form of test case result reporting system. Excel can work, but it has a problem: recording test results and keeping them updated in a way that is coordinated, cohesive, and visible to the team and project members. Incidentally, that control would likely fall under configuration management. Distributed testing, where some people are responsible for some tests and others for the rest of the test execution, requires that we collect all of the test case information in one place.
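
To make this concrete, here is a minimal sketch of the kind of record such a reporting system might hold. The field names and categories are illustrative assumptions, not a prescribed schema; the point is that every tester, wherever they sit, writes results into one shared, configuration-managed store.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Outcome(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"

@dataclass
class TestCaseResult:
    case_id: str           # unique test case identifier
    category: str          # e.g. "requirements", "combinatorial", "stochastic"
    iteration: int         # development iteration under test
    executed_on: date      # date the case was run
    tester: str            # who ran it; distributed testers feed one store
    outcome: Outcome       # pass / fail / blocked
    severity_points: int = 0   # 0 for a pass; see the severity scale below

# All results, from every tester, collect in one shared store so the whole
# team and project see the same picture.
results: list[TestCaseResult] = []
```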

Common Severity Definition

Our team should have a common view of severity when evaluating the test results, and the tracking tool should support that view. For example, we set definitions for assessing the faults we will find. Not all bugs or faults found in the product are equal or require equal treatment. Knowing this provides us with a mechanism for prioritization and, more importantly, informs the team, the project, and management of the present quality of the product. An example prioritization is provided below:

  • Fault not noticeable – 1 point
  • Customer perception – 10 points
  • Product feature fails – 50 points
  • Damage to people or material – 100 points

The higher the point value, the more severe the defect; the more severe the defect, the larger the implication for the product's use. With the tool and evaluation structure in place, we are ready to perform the tests and record the results. The recorded test results will provide us with the evidence from which to draw inferences about product quality.
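
To make the scale operational, the tracking tool can store these point values and roll them up per iteration or per package. A minimal sketch, using the point values above (the level names are shorthand of my own):

```python
# Severity point values from the scale above; the level names are shorthand.
SEVERITY_POINTS = {
    "fault_not_noticeable": 1,
    "customer_perception": 10,
    "feature_fails": 50,
    "damage_people_or_material": 100,
}

def total_severity(failures: list[str]) -> int:
    """Sum severity points across reported failures, e.g. per iteration."""
    return sum(SEVERITY_POINTS[level] for level in failures)

# Example: one feature failure plus two customer-perception issues.
print(total_severity(["feature_fails",
                      "customer_perception",
                      "customer_perception"]))   # -> 70
```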

Test Execution

Now our team will begin testing the product. For the sake of simplicity, we start by assuming the total contents of the proposed product are delivered in a single delivery. The test team will execute the test cases to ascertain the quality of the product in as tangible a way as possible, to understand each failure's impact on the product, and to ensure quick corrections.

Consider the following example. We have been conducting our tests on the product, in this case a vehicle system. We have made it through a portion of that testing, executing some fraction of the test cases (roughly 20% of the total). In that 20%, we have found at least one failure of the highest severity rating (100 points). Do you think there are no other severe problems in the system? At this point it is difficult to say. However, if you knew that the remaining tests are not regression tests but part of the new development work, that may help in the decision. Let's also add that test case priorities are homogeneously distributed through the test cases conducted; that is, we are not running only those tests on requirements that can evoke a failure of the 50- or 100-point magnitude (a defect in paint, for example, would never be 100 points).

[Figure: Testing results]

You never really know; however, since the failure was found in newly developed work and the remaining work is also new, it is certainly probable that there are more failures. Perhaps one, two, or even more of these high severity failures sit in the areas remaining to be tested. Given one known severe failure and testing only 20% complete, we are obliged to continue the exploration. This product is not ready to go.
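
A crude back-of-envelope calculation shows why. If severe defects are spread roughly evenly across the homogeneously prioritized test cases, one 100-point failure in the first 20% of testing suggests several more in the remaining 80%. This is only a uniform-distribution sketch, not a rigorous estimator:

```python
def naive_total_estimate(found: int, fraction_executed: float) -> float:
    """Uniform-distribution extrapolation: defects found so far, scaled up
    by the fraction of test cases executed. Assumes homogeneous priorities."""
    return found / fraction_executed

severe_so_far = 1        # one 100-point failure found
fraction_done = 0.20     # roughly 20% of test cases executed
estimate = naive_total_estimate(severe_so_far, fraction_done)
print(estimate)                    # -> 5.0 severe failures expected in total
print(estimate - severe_so_far)    # -> 4.0 likely waiting in the untested 80%
```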

Let's look at this from a different perspective. In this case, we have a series of iterations delivered to our test department for testing, with the same severity scale in use. Here we see three iterations delivered to testing. As you might expect, the first iteration has a few failures reported across the severity range. In the second iteration there are fewer failures, as the development group corrects those found by the testing group. As we progress through the testing, we see our defect counts increase and decrease, making it very difficult to predict the future. Additionally, we see the number of severe faults also fluctuates. There can be many reasons for this. If new content is added in each of the iterations, we can expect to see new bugs. If at some point the iterations amount to nothing but bug fixes and failures still appear, we can suspect that our software handling or build process may be compromised. In fact, if we are seeing repeat failures, bugs that were found in one iteration and closed in another only to reappear, that is almost a sure sign of a build, change, or configuration management problem.

[Figure: Bugs found per iteration]
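
Repeat failures are easy to flag automatically once results are tracked per iteration. The sketch below is my own illustration, not a feature of any particular tool; it marks any bug that failed, was absent (closed) in a later iteration, and then failed again:

```python
def reopened_bugs(iterations: list[set[str]]) -> set[str]:
    """Given the set of failing bug IDs per iteration (in order), return
    bugs that failed, were closed in a later iteration, and then reappeared;
    these point at the build or configuration management process."""
    reopened: set[str] = set()
    closed: set[str] = set()        # bugs seen failing, then absent
    seen_failing: set[str] = set()  # bugs that have failed so far
    for current in iterations:
        closed |= seen_failing - current   # closed as of this iteration
        reopened |= current & closed       # failing again after closure
        seen_failing |= current
    return reopened

# B1 fails in iteration 1, is closed in iteration 2, reappears in iteration 3.
print(reopened_bugs([{"B1", "B2"}, {"B2"}, {"B1"}]))   # -> {'B1'}
```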

In this next case, we are looking at the testing of a very large system, with more than 3,000 test cases. The blue line is the target rate of completion; the red line is the actual rate of completion. The bars are failures reported: the green bars denote the total number of failures reported each day, and the purple bars are the failures deemed very severe (50 and 100 points).

[Figure: Large project testing]
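
The same chart reduces to a simple schedule check. A minimal sketch, assuming a linear target rate (a real project's planned completion curve may be shaped differently):

```python
def schedule_gap(executed: int, total_cases: int,
                 day: int, planned_days: int) -> float:
    """Executed test cases versus a linear target; negative means behind plan."""
    target = total_cases * day / planned_days
    return executed - target

# Day 30 of a 60-day plan for a 3,000-case system, with 1,200 cases run:
print(schedule_gap(1200, 3000, 30, 60))   # -> -300.0, i.e. 300 cases behind
```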

Looking at another example, we see in the total failures found in each package of testing a steady decrease in the number of failures reported; so much so that a linear extrapolation predicts the next package will likely be in a state that is perhaps "launch-able". The caveat is that any failure found in the next iteration (package 4) must not produce a peculiarity of dangerous severity.

[Figure: Fault reports per package]
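
The extrapolation itself is just a least-squares line through the per-package failure counts. The counts below are illustrative, not the project's actual data:

```python
def forecast_next(counts: list[float]) -> float:
    """Fit a least-squares line to failures-per-package and predict the
    next package's count, floored at zero."""
    n = len(counts)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(counts) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, counts)) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return max(0.0, slope * n + intercept)

# Three packages with steadily falling failure counts (illustrative numbers):
print(forecast_next([42, 25, 9]))   # -> 0.0: package 4 looks "launch-able"
```

As the text cautions, the line predicts counts, not severities; a single 100-point failure in package 4 would override any favorable trend.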

We can learn much from paying attention to the defects: their type, severity, and rate of discovery. Some of the things you uncover may have nothing to do with verification but are more related to documentation control, change control, and perhaps build management. Knowing this is important because verification will only point to some of these problems and has little means of correcting them. Take this opportunity to learn about the product, and learn about the processes that deliver the product.

About the Author

Jon M. Quigley, PMP, CTFL, is a principal and founding member of Value Transformation, a product development training and cost improvement organization established in 2009. He holds multiple degrees, has authored and contributed to numerous books and magazine articles, and teaches and speaks at technical schools, universities, and conferences on a variety of business and product development topics. You can find Jon on Twitter at @JonMQuigley.

Comments

  1. Great article! Always interesting to understand the approach of testing an iteration and encountering defects early on and whether that can then be used to gauge the rest of the testing cycle. Thanks for the insight!

  2. Michael,

    There are limits, but if we do not look at these trends we are doing nothing but guessing. If every iteration is new content, the predictions may be less valid, but they are still interesting and provide some rationale for a decision to release. Otherwise, those decisions to release the software are based arbitrarily on a date, or hope, or our "gut" feel.
