A Crash Course on CAST’s New CRASH Report

Last week, CAST released the summary findings of its second annual CAST Report on Application Software Health (aka CRASH), which delves into the structural quality of business application software. The report has earned significant coverage throughout the technology media, including InformationWeek, InfoWorld and Computerworld, as well as the Wall Street Journal.

Two issues seem to resonate with the press. The first is technical debt. With financial debt such a hot-button issue around the globe, the finding that companies are spending millions to fix errors in application software that should have been caught before deployment underscores that business practices must fundamentally change if companies are to spend money on innovation rather than on maintenance. The second is this year’s statistics on Java, which show that Java-developed applications carry the greatest amount of technical debt and also score low in performance.

While these are the items the media has chosen to highlight, I thought it would be worthwhile to walk through the findings in a bit more depth, though not in as much depth as the full CRASH report will when it is released in 2012.

Breaking Down CRASH

CAST’s sample for this year’s report was almost three times the size of the 2010 edition: 745 applications versus 288, representing 365 million lines of code versus 108 million. Applications assessed this year range from about 10,000 lines of code (10 KLOC) to more than 11 million lines of code (11 MLOC). The 160 organizations that submitted applications represent ten industry segments, with the largest representation in financial services, telecommunications and IT consulting. Data came in from the U.S., Europe and India. The data are maintained by CAST in Appmarq, the world’s leading structural quality benchmarking repository.

The results from this year’s report are divided into three parts: Part I confirms last year’s findings, Part II offers new insights from this year’s greatly expanded sample, and Part III dives into a discussion of technical debt. For now, I’d like to review the first part of the report and the findings from last year that CAST confirmed.

Confirming CRASH

Although this year’s report looked at a significantly greater number of applications and lines of code from a far greater number of companies, it is interesting to note how many of the results remained consistent with the 2010 report (which was then called the Worldwide Application Software Quality Study). One of those confirmed findings is the one that many in the media – developer media in particular – have latched onto in their headlines.

Just under half (46 percent) of the applications sampled were Java programs; the other four most frequently seen technologies were mixed-technology, COBOL, ABAP and .NET. This year’s sample of Java apps was significantly larger than last year’s. Nevertheless, it was interesting to note that two things CAST discovered about Java-developed apps last year were confirmed in this year’s larger sample: performance scores for Java were lower than for any other technology assessed, and Java apps carry the largest amount of technical debt.

In fact, the technical debt for Java apps was so much higher than for the rest of the apps that it may have skewed the overall average slightly. Whereas this year’s CRASH report found an average technical debt of $3.61 per line of code across all applications, that number jumps to $5.42 per line of code for the Java apps alone.
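To put those per-line figures in perspective, here is a minimal back-of-the-envelope sketch in Python. Only the $3.61 and $5.42 per-line figures come from the report; the application size and the helper function are hypothetical, purely for illustration.

```python
# Back-of-the-envelope technical debt estimate (illustrative only).
# The per-line-of-code figures come from the CRASH report; the application
# size below is a hypothetical example, not data from the study.

AVG_DEBT_PER_LOC = 3.61   # all technologies, USD per line of code
JAVA_DEBT_PER_LOC = 5.42  # Java applications only, USD per line of code

def estimated_debt(lines_of_code: int, debt_per_loc: float) -> float:
    """Estimate total technical debt as lines of code times cost per line."""
    return lines_of_code * debt_per_loc

# A hypothetical 300 KLOC business application:
loc = 300_000
print(f"Average technology: ${estimated_debt(loc, AVG_DEBT_PER_LOC):,.0f}")
print(f"Java:               ${estimated_debt(loc, JAVA_DEBT_PER_LOC):,.0f}")
# Average technology: $1,083,000
# Java:               $1,626,000
```

Even at these modest sizes, the roughly $1.80 per-line gap between Java and the overall average compounds into hundreds of thousands of dollars of additional remediation cost per application.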

The other area where Java scored low was performance: lower than any of the other technologies assessed. In fact, on a scale from 1 to 4, with 1 representing an application at high risk for performance issues and 4 representing low risk, Java was the only technology to score below a 3 in the CRASH study. One hypothesis offered by CAST is inefficient code: modern development languages such as Java are more flexible and give developers more opportunity to violate good performance practices. The rest of the technologies, including .NET and COBOL, had scores in the 3.2 to 3.5 range.

Another confirmed finding from last year was the security rankings for COBOL and .NET. Combining the security scores of all ten technologies yielded a bimodal distribution, leading to the conclusion that applications fall into two distinct categories: one with very high scores and a second with moderate scores and a long tail toward poor scores.

COBOL applications, which today are generally written for the financial services and insurance industries – two markets where strong security to protect confidential information is required – continued to dominate the higher security scores. Again using the 1/high-risk to 4/low-risk scale, COBOL was the only technology to score above 3.0 in security (~3.7). The report notes that because COBOL applications run in mainframe environments, they are less exposed to Internet-based security challenges, which could boost their scores.

Also for the second year in a row, .NET applications brought up the rear among the technologies with poor security scores. Most technologies other than COBOL and .NET were grouped around a score of 2.5, while .NET was the only technology whose security score approached 2.0, representing moderately high risk. It was somewhat surprising how low the security scores for the other technologies were. The report posits that developers pay less attention to security where it is not mandated by government regulation.

Another confirmation came in the area of application quality in larger applications. Once again, CAST’s results contradict the common belief that the quality of an application degrades as it becomes larger. The Total Quality Index (TQI) – a composite of the five quality characteristic scores (transferability, changeability, robustness, performance and security) – did not correlate with application size in most cases. The theory CAST offers is that newer languages encourage modularity and other practices that control the amount of complexity added as an application grows larger.
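For readers who want to picture what “did not correlate with size” means in practice, here is a purely illustrative sketch. It assumes an unweighted average of the five characteristic scores as the composite, which is not necessarily how CAST actually weights the TQI, and the applications and scores are made up; it simply shows the kind of size-versus-quality check the report describes.

```python
# Illustrative only: a simple composite quality score checked against
# application size. The equal-weight average below is an assumption for
# illustration, not CAST's actual TQI formula, and the data are made up.
from statistics import correlation, mean  # correlation() requires Python 3.10+

# Hypothetical apps: size in KLOC and the five characteristic scores
# (transferability, changeability, robustness, performance, security), each 1-4.
sizes_kloc = [50, 800, 4000, 11000]
scores = [
    (3.2, 3.1, 3.4, 3.1, 3.2),
    (3.0, 2.9, 3.1, 3.0, 3.0),
    (3.3, 3.2, 3.5, 3.2, 3.3),
    (3.1, 3.0, 3.2, 3.1, 3.1),
]

composite = [mean(s) for s in scores]  # stand-in for a composite quality index

# A value near zero matches the report's finding that quality does not
# systematically degrade as applications grow larger.
print(f"size vs. composite correlation: {correlation(sizes_kloc, composite):.2f}")
```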

The lone exception to this again seems to be COBOL. When CAST broke down the sample by technology, it saw a tendency toward lower TQI scores among larger COBOL applications. The study also identified the percentage of highly complex objects in applications, broken down by technology. Generally speaking, the more highly complex objects an application contains, the greater the potential for quality issues, particularly around changeability (i.e., ease of modification and maintenance) and transferability (i.e., how easily new teams can understand and take over the code). COBOL had far and away the most highly complex objects – with a median in the 65% range – while none of the other technologies reached 20%, and most showed medians of 15% or lower.

The final confirmation of last year’s findings revolved around transferability and changeability, critical components of an application’s cost of ownership. This year, as last year, when compared by industry segment, transferability and changeability scores for applications used by government agencies were lower than for other segments, although government’s changeability scores were not significantly lower than those of other industries.

Nevertheless, government’s poor showing in transferability and changeability indicates that IT applications used by government agencies are more difficult to maintain than those in other industries. While real cost data was not part of this report, it stands to reason that government agencies are spending significantly more of their IT budgets on maintaining existing applications than on creating new ones. This hypothesis is supported by the Gartner 2010 IT Staffing & Spending Report, which noted that the government sector spends about 73% of its budget on maintenance, higher than any other segment.

I welcome your feedback about these confirmed elements of the CRASH report. Stay tuned for future posts about the report’s new findings and its exposure of the increasing seriousness of technical debt in IT applications.

Jonathan Bloom

Jonathan is an experienced writer with over 20 years covering the technology industry. Jon has written more than 750 journal and magazine articles, blogs and other materials published throughout the U.S. and Canada. He has expertise in a wide range of subjects within the IT industry, including software development, enterprise software, mobile, database, security, BI, SaaS/cloud, healthcare IT and sustainable technology. Jon holds a B.A. in History from Gettysburg College. He enjoys attending sporting events, cooking, studying American history and listening to Bruce Springsteen.
