Whether it’s in sports, medicine, music or even a military operation, I’m a firm believer in the “best man for the job” concept. This is why Agile, or more specifically, Scrum development, sounds to me like a smart play for an organization.
But even with the “best man” on the job, the process needs to be planned from the beginning, with goals and roles defined up front, so that the quality of the software being developed does not take a back seat to the speed with which it is developed – speed that can lead to oversights, omissions and errors. In addition, there need to be stringent checks and balances that ensure each scrum has performed its duties proficiently and that the interfaces between the scrum-built pieces do not bring down otherwise well-written code.
I’m not the only one who feels strongly about this last point. Recently over at Agile.DZone.com, Daniel Ackerson commented on the importance of software testing. He says the longer a company waits to test its software, the worse it gets, noting:
“The later you test, the more effort you’ve got to spend fixing bugs introduced weeks ago. And as the code is changing during the testing weeks, every test cycle you do has to be repeated. In the end, your software is no more stable than it was before the test cycle.”
He’s right. Scrums cannot wait until the end of the development process to test, because problems beget problems. However, Ackerson’s solution is still not enough. He says, “The only way to support a rapid cadence of releases is to automate testing.”
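The appeal of automation is easy to illustrate. Once a behavior is pinned down by an automated check, every build can re-verify it in seconds instead of repeating a manual test cycle. The function and checks below are hypothetical stand-ins, a minimal sketch of the idea rather than any particular team’s test suite.

```python
# Minimal sketch of an automated regression check (hypothetical example).
# Once written, this runs on every build at essentially zero cost.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Pin down the expected behavior so later changes can't silently break it.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 0) == 80.0
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass  # out-of-range input is correctly rejected
    else:
        raise AssertionError("expected ValueError for out-of-range percent")

test_apply_discount()
```

In practice a test runner such as pytest would discover and execute checks like this automatically on each commit, which is what makes the rapid release cadence Ackerson describes feasible.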
Unfortunately, even automated testing of Agile-developed software is not enough to ensure software quality; in fact, waiting until the software is ready to be tested is too late to be completely effective.
Issues with Agile
The beauty of Agile-developed software is also part of the reason why it is hard to ensure optimal application software quality. Bits and pieces of functionality that will eventually become interdependent are created and tested separately in different scrums. New functionality is often added on top of old, which further muddies the architectural waters, threatens reliability and performance, and increases the cost to modify and maintain the software. Moreover, as the number of lines of code grows, architectural complexity grows exponentially.
At this point, performance bottlenecks and structural quality lapses become very hard to detect, and structural quality becomes very difficult to see and measure. Being able to reliably find and fix critical architectural bottlenecks in a rapidly evolving code base is the key to developing high-quality applications with Agile techniques.
Cannot Live by Testing Alone
In his 2009 book, “Applied Software Measurement,” Capers Jones wrote, “In terms of defect removal, testing alone has never been sufficient to ensure high quality levels.” He backs up this statement with some compelling statistics:
- Testing by itself is time consuming and not very efficient. Most forms of testing only find about 35% of the bugs that are present.
- Static analysis prior to testing is very quick and about 85% efficient. As a result, when testing starts there are so few bugs present that testing schedules drop by perhaps 50%. Static analysis will also find some structural defects not usually found by testing.
- Formal inspections of requirements and design are beneficial too. They produce better documents for test case creation and, as a result, improve testing efficiency by at least 5% per test stage.
- A synergistic combination of inspections, static analysis and formal testing can top 96% in defect removal efficiency on average, and 99% in a few cases. Better still, the overall schedule will be shorter than with testing alone.
- The average for a combination of six kinds of testing – unit test, function test, regression test, performance test, system test and acceptance test – without preliminary static analysis is only about 85%.
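The arithmetic behind these figures is worth making explicit: if the stages are treated as independent (a simplification, but one that matches Jones’s numbers reasonably well), each stage removes its fraction of the defects that survived the stages before it. The sketch below works through that composition; the figures are the ones quoted above.

```python
# Sketch of how defect-removal efficiencies compose across sequential
# stages, assuming each stage removes its fraction of whatever defects
# remain. This independence assumption is a simplification.

def combined_dre(*stage_efficiencies: float) -> float:
    """Combined defect-removal efficiency of sequential stages (0.0-1.0)."""
    remaining = 1.0  # fraction of original defects still present
    for e in stage_efficiencies:
        remaining *= (1.0 - e)  # this stage misses (1 - e) of what's left
    return 1.0 - remaining

# Jones's six combined test stages alone remove about 85% of defects.
testing_only = combined_dre(0.85)

# Static analysis (~85%) run first leaves only 15% of defects for the
# test stages to work on, so the combination climbs well above 96%.
static_plus_testing = combined_dre(0.85, 0.85)

print(f"testing only:            {testing_only:.3f}")
print(f"static analysis + tests: {static_plus_testing:.3f}")
```

Running this shows the combination removing roughly 97–98% of defects, consistent with Jones’s “can top 96%” figure, and it also makes clear why testing alone plateaus: a single 85%-efficient pipeline leaves three times as many defects behind as the two-stage combination does.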
So Ackerson is correct in one respect: waiting until the end of an Agile cycle is not the best way to catch defects. But merely testing doesn’t work, either. What is needed is a combination of inspections, static analysis and testing, which, as noted, can catch 96% or more of defects.
So for Agile and other forms of Scrum-based development designed to speed the process, a system of automated analysis and measurement needs to be employed – one that provides comprehensive visibility into component interconnections and assesses the structural quality of each scrum-built component as well as the application software as a whole.
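To make the idea of automated structural measurement concrete, here is a deliberately tiny sketch: it parses source code and flags functions whose branching complexity exceeds a threshold. Real analysis platforms measure far more (inter-component dependencies, layering violations, performance anti-patterns), and every name and threshold here is illustrative rather than drawn from any actual tool.

```python
# Minimal sketch of automated structural measurement (illustrative only).
# It parses Python source and flags functions whose rough cyclomatic-style
# complexity exceeds a threshold - the kind of check a build pipeline can
# run on every scrum's output.
import ast

def branch_count(func: ast.FunctionDef) -> int:
    """Rough complexity score: 1 plus the number of branching constructs."""
    branching = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(node, branching) for node in ast.walk(func))

def flag_complex_functions(source: str, threshold: int = 10) -> list[str]:
    """Return names of functions whose branch count exceeds the threshold."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and branch_count(node) > threshold
    ]

sample = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                while i:
                    i -= 1
    return x
"""
print(flag_complex_functions(sample, threshold=3))  # ['tangled']
```

Because a check like this runs during the build rather than after it, structural problems surface while the code is still cheap to change, which is precisely the gap that end-of-cycle testing leaves open.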
To answer the question above, it is possible to guarantee top-quality software developed with Agile, but only if automated analysis and measurement are used to assess the application software during the build process and automated testing is then employed to hone quality further.