Introducing Security into Mainstream Development – Part 2

Last month, I had the opportunity to discuss the expanding threat of mobile IT security with CAST’s audience. The feedback we received was so overwhelming that I wanted to answer the questions we might have missed here on the blog. Lev already answered some of your questions in a previous post, so in this follow-up I’ll focus on the risks that often go ignored throughout the software development process.

Crash Course on the CRASH Report, Part 3: Technical Debt

Money isn’t everything…yeah, right!
Few people, if any, are so idealistic that they actually believe money isn’t everything. Whether it’s the scheduled time slot for a television show or a high-level decision to produce a controversial product, the motivation is money.
Need more convincing? Look at it from a “life imitates art” point of view – what is the most prevalent premise behind TV shows and movies? Money…either the quest for it or the painstaking process of deciding to set it aside for other interests (e.g., love and family). While some will say the latter proves that money isn’t everything, there wouldn’t be a struggle over such a decision if it weren’t mightily important.
This is why, of all the things identified in December’s CAST Report on Application Software Health – aka the CRASH report – the findings on the state of technical debt being accrued by companies worldwide are the most compelling argument for getting the structural quality of application software in check. In this third and final installment of my deeper look into the CRASH report (previous installments looked at “Confirmed Findings” and “New Insights”), I’ll focus on what it reported about technical debt in business applications.
Show me the Money
As I’ve previously stated, technical debt is the cost a company incurs to resolve issues in an application that were not addressed before the software was rolled out. In essence, technical debt is money that never had to be spent.
Technical debt is a term that has been around for quite some time, but it did not truly become a marquee concern until 2010, when Gartner’s Andy Kyte reported that technical debt was quickly closing in on the $1 trillion mark – a level he predicted would be reached by 2015.
As with CAST’s 2010 report on software quality, the 2011 CRASH report looks at technical debt on a smaller basis – per application. Nevertheless, it offers a grim tale of technical debt being accrued by companies.
This year’s study, which analyzed and measured the structural quality of 365 million lines of code within 745 IT applications used by 160 companies across 10 industries, determined that technical debt has grown to $3.61 per line of code. That means even small-to-medium-sized applications of about 300,000 lines of code surpass the million-dollar mark in technical debt. Moreover, as CAST Chief Scientist Dr. Bill Curtis pointed out, “A significant number of applications examined in the study – nearly 15% – had over a million lines of code, which means even the smallest of those contains over $3.6 million in technical debt.”
A good portion of the increase in technical debt per line of code from CAST’s 2010 report (which found $2.82 per line of code) to the 2011 report stems from the inclusion of more Java applications in the more recent study. In a previous installment of this series, I noted that Java applications were found to have a significantly lower Total Quality Index (TQI) score than other platforms; in fact, Java was the only platform with a median TQI below 3.0.
It should come as no surprise, then, that Java applications also carried the highest amount of technical debt – $5.42 per line of code – while COBOL carried the lowest at just $1.26 per line of code.
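The arithmetic behind these figures is easy to reproduce. Here is a minimal sketch in Python using the per-line rates quoted above; the 300,000-line application is a hypothetical example, not one drawn from the study:

```python
# Technical debt estimate: lines of code x debt per line of code.
# Per-line rates are the 2011 CRASH report figures quoted above (USD).
DEBT_PER_LOC = {
    "all platforms": 3.61,  # study-wide average
    "Java": 5.42,           # highest of the platforms measured
    "COBOL": 1.26,          # lowest of the platforms measured
}

def technical_debt(lines_of_code: int, rate_per_loc: float) -> float:
    """Estimated technical debt in USD for a single application."""
    return lines_of_code * rate_per_loc

# A small-to-medium application of ~300,000 lines already tops $1 million:
for platform, rate in DEBT_PER_LOC.items():
    print(f"{platform}: ${technical_debt(300_000, rate):,.0f}")
# all platforms: $1,083,000
# Java: $1,626,000
# COBOL: $378,000
```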
The Color of Money
With technical debt figures averaging more than $1 million per application, businesses should be taking notice. What is somewhat unfortunate for these businesses, however, is that the CRASH report also found that much of the technical debt being accrued does not show up in the dependability, security, or performance of applications, but rather in the transferability and changeability – i.e., the maintainability – of application software.
These health characteristics of application software tend to receive less attention than front-facing issues of performance and security because they are not what prevents customers from purchasing and using an application. But the CRASH report determined that 70% of accrued technical debt results from poor quality in terms of changeability (the ease with which an application can be changed or adapted) and transferability (the ability of others to change or customize code in an application). These are the areas that cost companies money – money that would otherwise be used to bolster innovation.
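Applied to the hypothetical 300,000-line application sketched above, that 70% share means roughly $758,000 of its $1.08 million in technical debt sits in changeability and transferability flaws rather than in anything a customer would ever see.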
Measuring and analyzing the quality and complexity of projects through automated solutions would contribute greatly to reducing technical debt and should be incorporated into the planning and development process. Spending a little money in the preproduction process sure beats paying a lot of money to fix issues after deployment.
And at the end of the day, it’s all about the money, bread, bucks, clams, dough, greenbacks, loot, moolah, gelt…

A Recipe for Quantifying the ROI on Improving Process Maturity

Types of Process Frameworks
1. What should I do? – Process Definition Frameworks (Associated Metric Type = Objective, End-Result, or Benefit)

ITIL
MOF
ISO, etc.

2. How well am I doing it? – Control/Audit/Maturity Frameworks (Associated Metric Type = Operational Level or Quality Indicator)

BS 15000
CMMI, etc.

3. How can I improve it? – Process Improvement Frameworks (Associated Metric Type = a mix of the two metric types above)

Six Sigma
TQM
Lean, etc.

A Recipe for Quantifying the ROI on Improving Process Maturity – the bang for the buck on improving the repeatability and quality of processes (a minimal worked sketch follows the steps):
1. Define the end-result metric (e.g., support cost per year in $)
2. Define the process performance metric (e.g., defect injection rate)
3. Quantify the end-result metric as a function of the process performance metric, e.g., support cost per year = f(defect injection rate)
4. Define processes and break down process activities (e.g., the testing process)
5. Define activity maturity levels – what it means to be at maturity level 1, level 2, etc.
6. Quantify the change in performance metrics due to moving from one maturity level to another
7. Use (3) to calculate the benefit of moving from one maturity level to another
8. Quantify the cost of moving from one maturity level to another (one-time and ongoing cost categories: organizational change, business process change, software licenses, productivity loss, administrative costs)
9. Divide (7) by (8) to calculate the ROI of the process maturity improvement
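To make the recipe concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder (the cost per defect, the volume of changed code, the defect rates at each maturity level, and the cost of the improvement); in practice, steps 3 and 6 would be calibrated from your organization’s own historical data:

```python
# A hypothetical worked example of the nine-step recipe above.
# All figures are illustrative placeholders, not calibrated data.

# Step 3: end-result metric as a function of the process performance metric.
# Assume yearly support cost scales linearly with the defect injection rate.
COST_PER_DEFECT = 500.0          # USD to handle one shipped defect (assumed)
CODE_CHANGED_PER_YEAR = 200_000  # lines of code changed per year (assumed)

def support_cost_per_year(defect_injection_rate: float) -> float:
    """Step 3: support cost ($/yr) = f(defects injected per changed line)."""
    return defect_injection_rate * CODE_CHANGED_PER_YEAR * COST_PER_DEFECT

# Step 6: assumed performance of the testing process at two maturity levels.
defect_rate = {1: 0.010, 2: 0.006}  # defects per changed line at levels 1 and 2

# Step 7: benefit of moving from maturity level 1 to level 2.
benefit = support_cost_per_year(defect_rate[1]) - support_cost_per_year(defect_rate[2])

# Step 8: one-time and ongoing costs of the move (organizational change,
# business process change, software licenses, productivity loss,
# administrative costs) - a single assumed total here.
cost_of_improvement = 250_000.0

# Step 9: ROI of the process maturity improvement.
roi = benefit / cost_of_improvement
print(f"Benefit: ${benefit:,.0f}/yr, ROI: {roi:.1f}x")
# Benefit: $400,000/yr, ROI: 1.6x
```

As written, step 9 yields a simple benefit-to-cost ratio; a payback period or multi-year view could be layered on top, but the structure of the calculation stays the same.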