Keep an eye on legacy apps, COBOL’s not dead!

Third-generation programming languages (3GLs) like COBOL or PL/1 are seen as outdated languages for “has-been” developers and no longer interest newcomers (there were even predictions that COBOL would die out in the medium term). These new developers prefer more modern technologies, like J2EE or .NET, and, worryingly, educational organizations provide few learning opportunities for 3GLs.

Crash Course on CRASH Report, part 3: Technical Debt

Money isn’t everything…yeah, right!
Few people, if any, are so idealistic that they actually believe money isn’t everything. Whether it’s the time slot for a television show or a high-level decision to produce a controversial product, the motivation is money.
Need more convincing? Look at it from a “life imitates art” point of view – what is the most prevalent premise behind most TV shows and movies? Money…either the quest for it or the painstaking process of deciding to set it aside for other interests (e.g., love and family). While some will say the latter proves that money isn’t everything, there wouldn’t be a struggle over such a decision if it wasn’t mightily important.
This is why, of all the things identified in December’s CAST Report on Application Software Health – aka the CRASH report – the findings on the state of technical debt being accrued by companies worldwide are the most compelling argument for getting the structural quality of application software in check. In this third and final installment of my deeper look into the CRASH report (previous installments looked at “Confirmed Findings” and “New Insights”), I’ll focus on what it reported about the technical debt in business applications.
Show me the Money
As I’ve previously stated, technical debt is the cost a company incurs to resolve issues in an application that were not addressed before the software was rolled out. In essence, technical debt is money that never had to be spent.
Technical debt is a term that’s been around for quite some time, but it did not truly become a marquee concern until 2010, when Gartner’s Andy Kyte reported that technical debt was quickly closing in on the $1 trillion mark – a level he predicted would be reached by 2015.
As with CAST’s 2010 report on software quality, the 2011 CRASH report looks at technical debt at a smaller scale – per application. Nevertheless, it tells a grim tale of the technical debt companies are accruing.
This year’s study, which analyzed and measured the structural quality of 365 million lines of code within 745 IT applications used by 160 companies across 10 industries, determined that technical debt has grown to $3.61 per line of code. That means even small- to medium-sized applications of about 300,000 lines of code surpass the million-dollar mark in technical debt. Moreover, as CAST Chief Scientist Dr. Bill Curtis pointed out, “A significant number of applications examined in the study – nearly 15% – had over a million lines of code, which means even the smallest of those contains over $3.6 million in technical debt.”
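To make the per-line figure concrete, here is a minimal back-of-the-envelope sketch in Java. The $3.61-per-line average is the figure from the report; the application sizes are simply the round numbers used above, not data from the study.

```java
// Back-of-the-envelope technical debt estimate.
// The $3.61/LOC average is the figure reported in the 2011 CRASH study;
// the application sizes below are illustrative round numbers, not report data.
public class TechnicalDebtEstimate {

    static double estimateDebt(long linesOfCode, double debtPerLine) {
        return linesOfCode * debtPerLine;
    }

    public static void main(String[] args) {
        double debtPerLine = 3.61;                // USD per line of code (reported average)
        long[] appSizes = {300_000L, 1_000_000L}; // hypothetical application sizes in LOC

        for (long loc : appSizes) {
            System.out.printf("%,d LOC -> about $%,.0f in technical debt%n",
                    loc, estimateDebt(loc, debtPerLine));
        }
    }
}
```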
A good portion of the increase in technical debt per line of code in the 2011 report versus CAST’s 2010 report (which found $2.82 per line of code) was due to the inclusion of more Java applications in the more recent study. In a previous installment of this look at the CRASH report I noted that Java applications were found to have a significantly lower Total Quality Index (TQI) score than other platforms; in fact, Java was the only platform with a median TQI below 3.0.
It should come as no surprise, then, that Java applications also carried the highest amount of technical debt – $5.42 per line of code – compared with COBOL, which carried the lowest at just $1.26 per line of code.
The Color of Money
With technical debt figures topping the $1 million mark on average, businesses should be taking notice. What is somewhat unfortunate for these businesses, however, is that the CRASH report also found that much of the technical debt being accrued does not show up in the dependability, security or performance of applications, but rather in the transferability and changeability – i.e., the maintenance – of application software.
These health characteristics of application software tend to receive less attention than front-facing issues of performance and security because they are not the things that prevent customers from purchasing and using an application. But the CRASH report determined that 70% of accrued technical debt results from poor quality in terms of changeability (the ease with which an application can be changed or adapted) and transferability (the ease with which others can change or customize an application’s code). These are the areas that cost companies money – money that would otherwise be used to bolster innovation.
Measuring and analyzing the quality and complexity of projects through automated solutions would contribute greatly to reducing technical debt and should be incorporated into the planning and development process. Spending a little money in the preproduction process sure beats paying a lot of money to fix issues after deployment.
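As a rough sketch of what building that measurement into the process could look like, here is a minimal quality-gate check. The threshold, the scale and the way the score is passed in are assumptions made purely for illustration; this is not a CAST tool or anything prescribed by the CRASH report.

```java
// Minimal sketch of a build-pipeline quality gate (illustrative assumptions only).
// A real setup would pull the score from an automated analysis tool; here the
// score is passed on the command line, with a fallback sample value.
public class QualityGate {

    public static void main(String[] args) {
        // Hypothetical quality score on the 1 (high risk) to 4 (low risk) scale used in the study.
        double measuredScore = args.length > 0 ? Double.parseDouble(args[0]) : 2.7;
        double minimumScore = 3.0; // hypothetical threshold agreed on by the team

        if (measuredScore < minimumScore) {
            System.err.printf("Quality gate failed: score %.2f is below the %.2f threshold%n",
                    measuredScore, minimumScore);
            System.exit(1); // non-zero exit blocks promotion to the next stage
        }
        System.out.printf("Quality gate passed: score %.2f%n", measuredScore);
    }
}
```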
And at the end of the day, it’s all about the money, bread, bucks, clams, dough, greenbacks, loot, moolah, gelt…
 

A Crash Course on CAST’s New CRASH Report

Last week, CAST issued a report on the summary findings of its second annual CAST Report on Application Software Health (aka CRASH), which delves into the structural quality of business application software. The report has earned significant coverage throughout the technology media, including InformationWeek, InfoWorld and Computerworld, as well as the Wall Street Journal.
What resonates with the press seems to be two issues. The first is technical debt. With financial debt such a hot-button issue around the globe, the idea that companies are spending millions to fix errors in application software that should have been fixed before deployment makes a stark point: business must fundamentally change if that money is to go toward innovation rather than maintenance. The second is this year’s statistics on Java, which show that Java-developed applications carry the greatest amount of technical debt and also score low on performance.
While these are the items the media has chosen to highlight, I thought it would be a good idea to walk through the findings in a bit more depth – though not in as much depth as the full CRASH report will when it is released in 2012.
Breaking Down CRASH
CAST’s sample data for this year’s report was almost three times the size of the 2010 edition’s – 745 applications versus 288, representing 365 million lines of code versus 108 million. Applications assessed this year range from 10,000 lines of code (10 KLOC) to more than 11 million lines of code (11 MLOC). The 160 organizations that submitted applications represent ten industry segments, with the largest representation in financial services, telecommunications and IT consulting. Data came in from the U.S., Europe and India. The data are maintained by CAST in Appmarq, the world’s leading structural quality benchmarking repository.
The results from this year’s report were divided into three parts: the first confirms last year’s findings, the second offers new insights from this year’s greatly expanded sample, and the third dives into a discussion of technical debt. For now, I’d like to review the first part of the report and the findings from last year that CAST confirmed.
Confirming CRASH

Although this year’s report looked at a significantly greater number of applications and lines of code from a far greater number of companies, it is interesting to note how many of the results remained consistent with the 2010 report (which was then called the Worldwide Application Software Quality Study). One of those confirmed findings is the one that many in the media – developer media in particular – have latched onto in their headlines.
Just under half of the applications (46 percent) were written in Java; the other four most frequently seen technologies were mixed-technology applications, COBOL, ABAP and .NET. This year’s sample of Java apps was significantly larger than last year’s. Nevertheless, it was interesting to note that two things CAST discovered about Java-developed apps last year were confirmed in this year’s larger sample: the first was that performance scores for Java were lower than for any other technology assessed, and the other was that Java apps carry the largest amount of technical debt.
In fact, the technical debt for Java apps was so much higher than for the rest that it may have skewed the overall average slightly. Whereas this year’s CRASH report found an average technical debt of $3.61 per line of code, breaking out the Java apps alone pushes that figure to $5.42 per line of code.
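To see how one high-debt segment pulls a blended average upward, here is a small illustrative calculation. The $5.42 and $3.61 per-line figures are from the report; the Java share of total code and the non-Java rate are hypothetical values chosen only to show the mechanics of the skew.

```java
// Illustration of how a high-debt segment skews a blended average.
// Reported figures: Java at $5.42/LOC, overall average at $3.61/LOC.
// The Java share and the non-Java rate below are hypothetical, picked so the
// weighted blend lands near the reported overall average.
public class BlendedDebtAverage {

    public static void main(String[] args) {
        double javaShareOfLoc = 0.40;   // hypothetical fraction of all analyzed LOC written in Java
        double javaDebtPerLoc = 5.42;   // reported Java figure, USD per LOC
        double otherDebtPerLoc = 2.40;  // hypothetical figure for the remaining LOC

        double blended = javaShareOfLoc * javaDebtPerLoc
                + (1 - javaShareOfLoc) * otherDebtPerLoc;

        System.out.printf("Blended average: $%.2f per line of code%n", blended); // ~ $3.61
    }
}
```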
The other area where Java scored low was performance, where it came in lower than any of the other technologies assessed. In fact, on a scale of 1 to 4 – with 1 representing an application at high risk for performance issues and 4 at low risk – Java was the only technology to score below a 3 in the CRASH study. One hypothesis CAST offers is inefficient code: modern development languages such as Java are more flexible and give developers more opportunity to violate good performance practices. The rest of the technologies, including .NET and COBOL, had scores in the 3.2 to 3.5 range.
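As a generic illustration of the kind of inefficiency that hypothesis points at – not code drawn from the applications CAST assessed – compare a string built by repeated concatenation in a loop with one built using a StringBuilder:

```java
// Generic example of a common Java performance anti-pattern (illustrative only).
public class StringBuildingExample {

    // Inefficient: each += creates a new String and re-copies everything built so far,
    // so the loop does quadratic work as the input grows.
    static String joinSlow(String[] items) {
        String result = "";
        for (String item : items) {
            result += item + ",";
        }
        return result;
    }

    // Better practice: StringBuilder appends in place, avoiding repeated copying.
    static String joinFast(String[] items) {
        StringBuilder sb = new StringBuilder();
        for (String item : items) {
            sb.append(item).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] sample = {"alpha", "beta", "gamma"};
        System.out.println(joinSlow(sample));
        System.out.println(joinFast(sample));
    }
}
```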
Another confirmed finding from last year was the security ranking for COBOL and .NET. The study combined the security scores of all ten technologies, which yielded a bimodal distribution, leading to the conclusion that applications fall into two distinct categories: one with very high scores, and a second with moderate scores and a long tail toward poor scores.
COBOL applications, which today are generally written for the financial services and insurance industries – two markets that require high security to protect confidential information – continued to dominate the higher security scores. Again, using the 1/high-risk to 4/low-risk scale, COBOL was the only technology to score above a 3.0 in security (~3.7). The report notes that since COBOL applications run in mainframe environments, they are less exposed to Internet-based security challenges, which could boost their scores.
Also for the second year in a row, .NET applications brought up the rear among the technologies with poor security scores. Most technologies other than COBOL and .NET were grouped in or around a score of 2.5; .NET was the only technology whose security score approached 2.0, representing moderately high risk. It was somewhat surprising how low the security scores for the other technologies were. The report posits that developers pay less attention to security where it is not mandated by government regulation.
Another confirmation came in the area of application quality in larger applications. Once again, CAST’s results contradict the common belief that the quality of an application degrades as it grows larger. The Total Quality Index (TQI) – a composite of the five quality characteristic scores (transferability, changeability, robustness, performance and security) – did not correlate with the size of the application…in most cases. The theory CAST offers here is that newer languages encourage modularity and other practices that control the amount of complexity added as an application grows larger.
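For readers who haven’t seen a composite index before, here is a deliberately simplified sketch. CAST’s actual aggregation method isn’t spelled out in the summary findings, so this example simply averages the five characteristic scores on the study’s 1-to-4 scale, purely to show the idea.

```java
// Simplified composite-score sketch (illustration only; not CAST's actual TQI formula).
public class CompositeQualityScore {

    // Average the five characteristic scores, each on the 1 (high risk) to 4 (low risk) scale.
    static double composite(double transferability, double changeability,
                            double robustness, double performance, double security) {
        return (transferability + changeability + robustness + performance + security) / 5.0;
    }

    public static void main(String[] args) {
        // Hypothetical scores for a single application.
        double score = composite(2.8, 3.0, 3.3, 3.1, 3.4);
        System.out.printf("Composite quality score: %.2f%n", score); // 3.12
    }
}
```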
The lone exception again seems to be COBOL. When CAST broke down the sample by technology, it saw a tendency toward lower TQI scores among larger COBOL applications. Moreover, the study identified the percentage of highly complex objects in applications, broken down by technology. Generally speaking, the more complex the objects, the greater the potential for quality issues, particularly around changeability (i.e., maintenance) and transferability (i.e., ease of change or customization). COBOL had far and away the most highly complex objects – with a median in the 65% range – while none of the other technologies reached 20%, with most showing medians of 15% or lower.
The final confirmation of last year’s findings revolved around transferability and changeability, critical components of an application’s cost of ownership. This year, as last year, when compared by industry segment, transferability and changeability scores for apps used by government agencies were lower than for other segments, although government’s changeability scores were not significantly lower than those of other industries.
Nevertheless, government’s poor showing in transferability and changeability indicates that IT applications used by government agencies are more difficult to maintain than those in other industries. While real cost data was not part of this report, it stands to reason that government agencies are spending significantly more of their IT budgets on maintaining existing applications than on creating new ones. This hypothesis is supported by the Gartner 2010 IT Staffing & Spending Report, which noted that the government sector spends about 73% of its budget on maintenance – higher than any other segment.
I welcome your feedback about these confirmed elements of the CRASH report. Stay tuned for future posts about the report’s new findings and its exposure of the increasing seriousness of technical debt in IT applications.