High Efficiency Defect Removal for Software Projects
Software quality depends upon two important variables. The first is the “defect potential”: the sum total of bugs likely to occur in requirements, architecture, design, code, documents, and “bad fixes” (new bugs introduced by bug repairs). Defect potentials are measured using function points, since “lines of code” cannot capture requirements and design defects. The second is “defect removal efficiency” (DRE): the percentage of bugs found and eliminated before release of the software to clients.
Defect potentials and defect removal efficiency (DRE) are useful quality metrics developed by IBM circa 1973 and widely used by technology companies.
Defect potentials are the sum total of bugs found in requirements, architecture, design, code, and other sources of error. The approximate U.S. average for defect potentials is shown in Table 1.
Table 1: Average defect potentials circa 2015 for the United States average
• Requirements 1.00 defects per function point
• Architecture 0.30 defects per function point
• Design 1.15 defects per function point
• Code 1.35 defects per function point
• Security code flaws 0.25 security flaws per function point
• Documents 0.50 defects per function point
• Bad fixes 0.40 defects per function point
• Totals 4.95 defects per function point
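Because the rates in Table 1 are expressed per function point, total expected defects scale linearly with project size. A minimal sketch (the dictionary simply copies the Table 1 averages; the function name is illustrative):

```python
# Approximate U.S. average defect potentials circa 2015
# (defects per function point), copied from Table 1.
DEFECT_POTENTIALS = {
    "requirements": 1.00,
    "architecture": 0.30,
    "design": 1.15,
    "code": 1.35,
    "security_flaws": 0.25,
    "documents": 0.50,
    "bad_fixes": 0.40,
}

def expected_defects(function_points: float) -> float:
    """Total latent defects expected for a project of a given size."""
    return function_points * sum(DEFECT_POTENTIALS.values())

# A 1,000 function point project at the 4.95 average:
print(round(expected_defects(1000), 2))  # 4950.0 latent defects
```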
Defect potentials are of necessity measured using function point metrics. The older “lines of code” metric cannot show requirements, architecture, and design defects, nor any other defects that originate outside the code itself.
Defect potentials range from < 2.00 per function point for top teams and quality-strong methods to > 6.00 defects per function point for inexperienced teams and quality-weak methods.
Defect removal efficiency (DRE) is also a powerful and useful metric. Every important project should top 99 percent in DRE, but few do.
DRE is measured by keeping track of all bugs found internally during development, and comparing these to customer-reported bugs during the first 90 days of usage. If internal bugs found during development total 95 and customers report 5 bugs, DRE is 95 / (95 + 5) = 95 percent.
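The calculation in the example above can be written directly (a sketch; the function name is mine, not a term from the article):

```python
def defect_removal_efficiency(internal_bugs: int, customer_bugs: int) -> float:
    """DRE: bugs removed before release as a percentage of all bugs found.
    Customer-reported bugs are counted for the first 90 days of usage."""
    total = internal_bugs + customer_bugs
    return 100.0 * internal_bugs / total

# 95 bugs found during development, 5 reported by customers:
print(defect_removal_efficiency(95, 5))  # 95.0
```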
Table 2 shows approximate DRE values for common pre-test and test methods, although there are variations for each method and also for the patterns of methods used.
To illustrate the principles of optimal defect prevention, pre-test defect removal, and test defect removal, Table 2 shows a sequence of pre-test and test stages that will top 99 percent in defect removal efficiency (DRE). Table 2 assumes a project of 1,000 function points and about 53,000 Java statements.
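The sizes quoted above imply roughly 53 Java statements per function point, consistent with published “backfiring” ratios; a sketch of the conversion (the ratio here is inferred from the article’s example, not stated by it):

```python
# Logical statements per function point implied by the Table 2 example
# (1,000 function points ≈ 53,000 Java statements).
JAVA_STATEMENTS_PER_FP = 53

def function_points_to_statements(fp: float, ratio: float = JAVA_STATEMENTS_PER_FP) -> float:
    """Rough backfiring conversion from function points to logical statements."""
    return fp * ratio

print(function_points_to_statements(1000))  # 53000
```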
“Software quality depends upon two important variables— defect potentials and defect removal efficiency (DRE)”
DRE measures can be applied to any combination of pre-test and testing stages. The U.S. norm is to use static analysis before testing and six kinds of testing:
unit test, function test, regression test, performance test, system test, and acceptance test. This combination usually results in about 95 percent DRE.
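The way a chain of removal stages compounds into an overall DRE can be sketched as follows: if each stage removes a fraction of the defects still present when it runs, cumulative DRE is one minus the product of the surviving fractions. The per-stage efficiencies below are illustrative assumptions, not values from the article:

```python
def cumulative_dre(stage_efficiencies):
    """Cumulative DRE for a sequence of removal stages, assuming each
    stage removes its fraction of the defects still remaining."""
    remaining = 1.0
    for e in stage_efficiencies:
        remaining *= (1.0 - e)
    return 1.0 - remaining

# Assumed efficiencies for static analysis plus the six common test
# stages (unit, function, regression, performance, system, acceptance):
stages = [0.55, 0.30, 0.35, 0.30, 0.30, 0.35, 0.30]
print(round(100 * cumulative_dre(stages), 1))  # ≈ the 95 percent norm noted above
```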
In order to top 99 percent in DRE, Table 2 shows several forms of defect prevention and includes inspections as an important pre-test removal method. Formal inspections have the highest DRE of any known method, and decades of empirical data behind them.
Due to inspections, static analysis, and formal testing by certified test personnel, DRE for code defects can top 99.75 percent. It is harder to top 99 percent for requirements and design bugs, since both resist testing and can only be found via inspections or by text static analysis.
The combination of defect potential and defect removal efficiency (DRE) measures provides software engineering and quality personnel with powerful tools for predicting and measuring all forms of defect prevention and all forms of defect removal.
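Combining the two metrics described above gives a simple estimate of delivered defects: the defect potential times the fraction of defects not removed. A sketch with assumed inputs (the 4.95 average from Table 1 and the 99 percent DRE goal):

```python
def delivered_defects(function_points: float, potential_per_fp: float,
                      dre_percent: float) -> float:
    """Defects expected to reach customers: potential * (1 - DRE)."""
    total_potential = function_points * potential_per_fp
    return total_potential * (1.0 - dre_percent / 100.0)

# 1,000 function points at the 4.95 U.S. average, with 99 percent DRE:
print(round(delivered_defects(1000, 4.95, 99.0), 1))  # 49.5
```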