Human beings are odd animals. We’re the only animal that experiences embarrassment over its mistakes; some say we’re the only animal that realizes it makes them. We also run the full gamut of emotions when we make mistakes – from frustration and self-deprecation to humor and acceptance.
Mistakes are so prevalent among humans that they’ve become ingrained in our culture. In music, “Ol’ Blue Eyes” Frank Sinatra sings, “Regrets? I’ve had a few, but then again, too few to mention,” while in more modern times we’ve heard Billy Joel croon, “You’re only human, you’re allowed to make your share of mistakes.”
If mistakes are a sign of humanity, then software developers most definitely qualify as human beings (not that I ever doubted they would). From the novice programmer to the senior engineer, from the best to the worst, all developers are apt to err now and then. The question is, when they do, will they catch the error immediately and go back and fix it, or will that error remain, a thorn in the side of the software they are working on?
To Err is Human
Invariably, nearly every programmer winds up treating errors in each of the ways mentioned above. Some errors are easier to detect than others: they are caught right away, or eventually spotted with the naked eye and fixed on review. The real problem facing software development is the error that goes undetected precisely because it is so out of the ordinary.
In an article posted on CodeGuru, Andrey Karpov discusses how to make “fewer errors at the code writing stage.” His article offers five quick tips on how to avoid errors, but in a sixth point, he admits that not all coding errors are detectable:
“For many errors, there are no recommendations on how to avoid them. They are most often misprints both novices and skillful programmers make…However, many of these errors can be detected at the stage of code writing already, first of all with the help of the compiler and then with the help of static code analyzers’ reports after night runs.”
Once again, static analysis, rather than testing alone, is credited as the key to detecting errors indiscernible to the naked eye.
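To make that “misprint” category concrete, here is a hedged sketch in C (the struct and function names are invented for illustration): a copy-paste slip that compiles cleanly and survives a casual read, but that static analyzers flag – PVS-Studio, for instance, reports identical sub-expressions on both sides of an operator like `&&`.

```c
#include <stdbool.h>

/* Hypothetical example of a copy-paste misprint: the second comparison
 * was meant to check the y field, but the pasted line still checks x.
 * The code compiles without complaint; a static analyzer that flags
 * identical operands around && will catch it. */
struct point { int x, y; };

static bool points_equal_buggy(struct point a, struct point b) {
    return a.x == b.x && a.x == b.x;   /* misprint: should be a.y == b.y */
}

static bool points_equal_fixed(struct point a, struct point b) {
    return a.x == b.x && a.y == b.y;
}
```

Testing can easily miss this, too: a suite that never happens to compare two points differing only in `y` passes both versions.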
As Capers Jones discusses in his 2009 book, Applied Software Measurement, “In terms of defect removal, testing alone has never been sufficient to ensure high quality levels.” He adds the following figures to bolster his stance:
- Testing by itself is time consuming and not very efficient. Most forms of testing only find about 35% of the bugs that are present.
- Static analysis prior to testing is very quick and about 85% efficient. As a result, when testing starts there are so few bugs present that testing schedules drop by perhaps 50%. Static analysis will also find some structural defects not usually found by testing.
- Formal inspections of requirement and design are beneficial too. Formal inspections create better documents for test case creation, and as a result improve testing efficiency by at least 5% per test stage.
- A synergistic combination of inspections, static analysis and formal testing can top 96% in defect removal efficiency on average, and 99% in a few cases. Better still, the overall schedules will be shorter than with testing alone.
- The average for a combination of six kinds of testing – unit test, function test, regression test, performance test, system test and acceptance test – without preliminary static analysis is only about 85%.
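As a back-of-the-envelope check on those figures, one can model each removal stage as catching a fixed fraction of the defects still present when it runs – a simplifying assumption for illustration, not how Jones derived his empirical numbers. Under that model, 85% static analysis followed by 85%-effective testing leaves only 0.15 × 0.15 of the original defects, about 97.75% combined removal, which is consistent with the “can top 96%” claim:

```c
/* Combined defect removal efficiency of a pipeline of stages, assuming
 * (simplistically) that each stage removes a fixed fraction of the
 * defects still present when it runs. */
static double combined_efficiency(const double *stage, int n) {
    double remaining = 1.0;              /* fraction of defects still present */
    for (int i = 0; i < n; i++)
        remaining *= 1.0 - stage[i];     /* this stage misses (1 - e) of them */
    return 1.0 - remaining;              /* e.g. {0.85, 0.85} -> 0.9775 */
}
```

The same model applied to six independent test stages at 35% each gives 1 − 0.65⁶ ≈ 92%, noticeably higher than Jones’s measured 85% – a reminder that real test stages overlap in what they find rather than acting independently.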
Quicker, more complete, and more efficient code analysis? Employing static analysis certainly sounds like the way to go. Development teams should be more proactive about software errors and address them through static analysis, and the best, most efficient way to do that is to employ a platform of automated analysis and measurement during the build phase of each project rather than relying on manual review by IT staff.
After all, they’re only human!