Gartner Webinar: Get Smart about Technical Debt

Over the past 10 years or so, it has been interesting to watch the metaphor of Technical Debt grow and evolve. As with most concepts and practices in software development, it has not been embraced by the industry without some debate or controversy. Regardless of your personal thoughts on the topic, you must admit that the concept of Technical Debt resonates strongly outside of development teams and has fueled the imagination of others to expound on the concept and extend it into additional areas such as design debt and other related metaphors. There is now a spate of resources dedicated to the topic, including the industry aggregation site OnTechnicalDebt.com.
We recently had David Norton, Research Director with Gartner Research, as the guest speaker on a webinar, “Get Smart about Technical Debt.” During the webinar, Mr. Norton was passionate about the topic, and he believes that Technical Debt will transcend its casual use by architects (and marketers) to find a more permanent place in the vernacular of CIOs and CFOs. Spend just a little time with Mr. Norton and it is clear that he is on a quest to establish the concept as an important indicator of risk, and that the practice of measuring and monitoring Technical Debt will soon become a requirement as the industry continues to mature.
In my personal view, Technical Debt, although not perfect, is one of the few development metrics that rises above the techno-speak of the dev team. Technical Debt seems to resonate because it attempts to quantify the uncertainty within a product or development process—an uncertainty that underlies all the years of process improvement initiatives, training classes, project management tools and overhead that are typically forced upon dev teams.
I believe that rather than fighting this metaphor, dev teams should embrace Technical Debt and work with the organization to create a common definition and method for calculating it. Once a definition and method are determined, it takes very little overhead to provide ongoing measurement. To me, the value to the development organization is that we are now armed with a “hard cost” for poor or myopic decisions.
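As a purely illustrative sketch of what such a definition and method might look like in practice, the snippet below counts structural-quality violations by severity, weights each by an estimated effort to fix, and multiplies by a blended hourly rate. The severities, hours and rate are hypothetical placeholders, not CAST’s or Gartner’s actual formula.

```python
# Hypothetical technical-debt estimate: violations x effort-to-fix x hourly rate.
# The severities, hours and rate are illustrative assumptions, not an official formula.

HOURS_TO_FIX = {          # estimated remediation effort per violation, by severity
    "critical": 8.0,
    "high": 4.0,
    "medium": 1.5,
    "low": 0.5,
}
BLENDED_HOURLY_RATE = 75  # assumed cost of one developer hour, in dollars


def technical_debt(violation_counts):
    """Return an estimated technical debt, in dollars, for one application.

    violation_counts maps a severity ("critical", "high", ...) to the number
    of structural-quality violations found at that severity.
    """
    return sum(
        count * HOURS_TO_FIX[severity] * BLENDED_HOURLY_RATE
        for severity, count in violation_counts.items()
    )


if __name__ == "__main__":
    # Example: results of a (hypothetical) quality assessment of one application.
    counts = {"critical": 12, "high": 140, "medium": 900, "low": 2500}
    print(f"Estimated technical debt: ${technical_debt(counts):,.0f}")
```

The point of such a sketch is not the particular numbers but that, once agreed upon, the calculation can be rerun automatically after every assessment with essentially no added overhead.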
I really enjoyed Mr. Norton’s passion and the great discussions and questions from those who joined us on the webinar. If you missed it, you can watch the recording here. I’d be interested in your take on his view.
If you want to learn more about Technical Debt, there are some great resources listed below:

Java Application Architecture: Modularity Patterns with Examples Using OSGi (Agile Software Development Series) by Kirk Knoernschild
The Economics of Software Quality by Capers Jones and Olivier Bonsignour
Paying Down the Interest on Your Applications: A Guide to Measuring and Managing Technical Debt, a white paper from CAST
The CRASH Report – 2011/12 (CAST Report on Application Software Health)

How to Monetize Application Technical Debt, a research paper from Gartner and CAST
 

ID’ing the Debt

Last fall, Gartner’s Andy Kyte issued a wake-up call about technical debt that was akin to a piano being dropped on the head of the IT industry. In estimating that technical debt – the cost to fix the structural quality problems in an application that, if left unfixed, put the business at serious risk – has already reached $500 billion globally and is fast on its way to exceeding $1 trillion by 2015, Kyte stirred up a hornet’s nest of activity around the topic.
So far, much of the buzzing from those hornets has been in the form of continuing to discuss and expose the problem; we’ve seen plenty of articles calling for a change and even comparing the ignoring of technical debt to how so many Americans ignore their personal debt. What we haven’t seen much of (except in this blog space) are concrete solutions for technical debt.
That’s why when Vijay Narayanan in his “Art of Software Reuse” blog offered “9 Quick Tips to Reducing Technical Debt,” I had to see what he could offer that had, up to now, escaped the columns of most members of the media.
Tech Debt Tips
Narayanan accurately notes that “Reducing technical debt is an integral aspect of refactoring” and proceeds to offer his nine methods to accomplish that. Among these nine tips, he calls for minimizing redundancy, increasing consistency and suggests developers “Harmonize multiple, incompatible interfaces and make them more domain relevant.”
The last of his tips is, not surprisingly, to “create a comprehensive suite of automated tests.”
I think we’ve been here before. Another column saying don’t do the things that don’t work and then test everything once it’s done. That may plug up some of the holes that result in technical debt, but it certainly won’t eliminate it.
Testing vs. Assessing
As we’ve previously pointed out in this blog, testing can only address an application’s “external quality.” Testers can effectively address only visible symptoms such as correctness, efficiency or maintenance costs. What lies beneath the surface, however, the internal quality, directly impacts the external quality and can lead to even greater issues. These characteristics – program structure, complexity, coding practices, coupling, testability, reusability, maintainability, readability and flexibility – are the invisible root of the software quality iceberg and can do far more damage to a company’s reputation and IT maintenance budget than the visible issues.
For this reason, we’re finding more and more companies employing automated analysis and measurement of all critical applications. Such a service ensures that quality is built into systems with every developer contribution – whether the software is being built from scratch or being customized.
Setting a Technical Debt Threshold
When it comes to technical debt, a company needs to determine how much, if anything, it should put into remediating it.
This is what technical debt does – it puts a monetary figure on application structural quality and enables comparisons, previously not possible, between IT costs and potential losses due to failure. The goal is to keep the number of structural quality violations – or, more importantly, the cost to fix them – well below the cost the company would incur should they be deployed and a failure ensue.
A Technical Debt Action Plan
The most effective and efficient route to identifying technical debt is one that evaluates the structural quality of a company’s most mission-critical applications through a platform of automated analysis and measurement. As each of the applications is built, rewritten or customized, a company measures its structural quality at every major release, and when the applications are in operation, it measures their structural quality every quarter.
In particular, companies must keep a watchful eye on the violation count, monitor changes in that count, and calculate the technical debt of the application after each quality assessment. Once the company has a dollar figure on technical debt, it needs to compare it to the business value to determine how much technical debt is too much and how much is acceptable based on the marginal return on business value. There are several publications on the market – including “The Business Value of Application Internal Quality” by Dr. Bill Curtis – that can provide a framework for calculating the loss of business value due to structural quality violations.
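The sketch below illustrates what that quarterly review loop could look like in code, with made-up figures and an assumed 10% ceiling on debt as a share of business value; the real tipping point should come from a business-value framework such as the one referenced above, not from these placeholder numbers.

```python
# Illustrative quarterly monitoring loop: track the violation count and the
# monetized debt at each assessment, and flag any period in which the debt
# exceeds an assumed acceptable share of the business value at risk.

from dataclasses import dataclass


@dataclass
class Assessment:
    period: str            # e.g. "2011-Q3"
    violation_count: int   # structural-quality violations found
    debt_dollars: float    # monetized cost to fix them
    business_value: float  # estimated business value at risk for this application


def review(history, acceptable_ratio=0.10):
    """Print the debt trend and flag assessments past the assumed tipping point."""
    previous = None
    for current in history:
        delta = current.violation_count - previous.violation_count if previous else 0
        ratio = current.debt_dollars / current.business_value
        verdict = "REMEDIATE" if ratio > acceptable_ratio else "acceptable"
        print(f"{current.period}: {current.violation_count} violations ({delta:+d}), "
              f"debt ${current.debt_dollars:,.0f}, {ratio:.0%} of business value -> {verdict}")
        previous = current


if __name__ == "__main__":
    # Made-up figures for one application over three quarters.
    review([
        Assessment("2011-Q1", 3400, 310_000, 4_000_000),
        Assessment("2011-Q2", 3900, 365_000, 4_000_000),
        Assessment("2011-Q3", 4600, 455_000, 4_000_000),
    ])
```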
Once technical debt is identified and monetized, and a determination of the tipping point is made, companies can turn that identification into an action plan that works to reduce the errors that lead to technical debt in a financially prudent manner. This leaves companies with software that is still delivered on time, but is far more structurally sound and does not carry the risks that are causing technical debt to challenge the national debt in size.
 

Insecure Over Quality

The rate at which security issues have plagued businesses lately is staggering. Every week we hear of multiple vulnerabilities, millions of personal data records being exposed and corporations watching profits dwindle as reparation costs for these breaches extend into millions and even billions of dollars.
What’s worse than hearing about these incidents in the media is that the reality of the situation is apparently even worse than the perception.
George Hulme recently reported in CSO magazine that Veracode, a software services provider, released some pretty staggering findings based on security analyses it performed on more than 4,800 applications submitted to the firm. The findings, published in Veracode’s “State of Software Security Report,” showed that 58 percent of the applications submitted to the firm were of “unacceptable security quality.”
Now, you would think that companies that customize their “off-the-shelf” software might artificially inflate the number, but Hulme reports a rather shocking statistic from the report:
“The report found that 66 percent of applications developed by the software industry had unacceptable security quality, and a surprising 72 percent of security software met the same poor ranking.”
It’s kind of scary to think that security software is insecure. Clichés about “the fox watching the hen house” and “snake oil” would come immediately to mind if I were not at least relatively certain that security vendors really do mean well.
Security Complex
The apologetic innocence of each software vendor in the wake of discovering a breach might make one think their mantra should come from the lips of Jessica Rabbit of “Who Framed Roger Rabbit” fame: “I’m not bad, I’m just drawn that way.”
Nevertheless, security is a key health factor of software and the failure of companies across the globe to ensure complete security of their applications is a key contributor to the spiraling technical debt that exists – currently $500 billion globally according to Gartner and over $1 million per company on average according to CAST’s Annual Worldwide Application Software Quality Study.
When it comes to technical debt, security vendors appear to be suffering from the same issues as everyone else out there – there is a great deal of risk that exists within their application software that should have been identified before it was deployed. It’s not their intent to roll out vulnerable software, but just how should security vendors find the one line of code out of every 4,000 that could lead to failure?
Automating Security Quality
As well-intentioned as security vendors may be, it’s their job to get security right, just as it’s the job of Sony to keep its users’ financial data confidential and it’s the job of GlaxoSmithKline to keep confidential the types of prescription medications being taken by its customers. Regardless of how many mea culpas they offer and how sincere they may be, something needs to be done to shore up the vulnerabilities that leave application software open to being breached.
What needs shoring up is the structural quality of the software.
Vendors should make greater use of static analysis to measure the structural quality of applications, applying automated analysis and measurement to vet all the health factors of software – security included. Only automated analysis and measurement can dig deep into application software and assess it against thousands of industry standards and norms to identify those elements within the application that pose significant risk of failure to users or expose the software to possible breach by hackers.
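Full structural analysis of that kind requires a dedicated platform, but a toy example conveys the basic mechanic: scan every source file against a set of risky-pattern rules and report each violation with its location. The two rules below (a hard-coded credential and a bare except clause) are illustrative stand-ins for the thousands of rules a real analyzer applies, and this sketch only inspects individual lines of text rather than the application’s actual structure.

```python
# Toy rule-based static analysis: scan source files for a few known-risky
# patterns and report each violation with its location. A real analyzer applies
# thousands of rules and understands the code's structure, not just its text.

import re
import sys
from pathlib import Path

RULES = [
    ("hard-coded credential", re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE)),
    ("bare 'except:' swallows all errors", re.compile(r"^\s*except\s*:")),
]


def scan(root):
    """Scan every .py file under root and return the number of violations found."""
    violations = 0
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for description, pattern in RULES:
                if pattern.search(line):
                    violations += 1
                    print(f"{path}:{lineno}: {description}")
    return violations


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(1 if scan(target) else 0)
```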
Muhammad Ali once said, “You can’t hit what you can’t see”; that applies to software, too. Only once issues are visible can application developers fix the problems and prevent security breaches from happening. And if companies cannot uncover these areas of vulnerability, they will continue to be left feeling insecure about their security.

Managing Risk, Avoiding Disruption

I’ve written quite a bit about the spate of businesses that have suffered some form of disruption over the last few months – security breaches at Sony, Android malware attacks, system outages at the London Stock Exchange, operational system failures on London’s East Coast Line and numerous others. All these cases have had one thing in common: they all have had software structural issues as their root causes.
One recurring question arises from these failures, “How does a company avoid the structural flaws that lead to business interruption?”
CAST, in conjunction with Gartner, has released a white paper that discusses the importance of mitigating risk in software and avoiding the failures that plague businesses. The paper, titled “Software Risk Management: A Primer for IT Executives,” makes the case that structural quality is the key to reducing the risk of business disruption.
Modern Goals, Modern Problems
Gartner Research Director Thomas Murphy, whose research is included in the white paper, notes that “software quality” is often a misnomer for what most companies actually practice, which is risk management. When it comes to practices and scheduling in software projects, the focus is not to drive quality but to mitigate delivery risk. However, as organizations seek to drive down maintenance costs and adapt to the shorter project life cycles found in agile practices, it is equally important, if not more so, to focus on reducing the risk of business disruption.
As the CAST white paper shows, structural quality is essential for managing the root drivers of IT costs and business risks in mission-critical applications. Unlike the quality of the process by which software is built, enhanced and maintained, functional, non-functional and structural quality have to do with the software product itself – the asset that generates business value.
Accurately analyzing and measuring the quality of an application (which typically has a large number of components interconnected in complicated ways, and connections with databases, middleware and APIs) is monstrously complex. It can only be accomplished with an automated system that analyzes the inner structure of all components and evaluates their interactions in the context of the entire application.
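A small, invented example hints at why that whole-application context matters: a component that looks clean in isolation can still be a major risk if many other components depend on it, because a defect there propagates everywhere. The component names and risk threshold below are assumptions made for illustration.

```python
# Toy whole-application view: build a dependency map across components (modules,
# database tables, APIs) and flag the ones with so many dependents that a defect
# there would ripple through the entire system. Names and threshold are invented.

from collections import defaultdict

# component -> the components it calls or depends on
DEPENDENCIES = {
    "web_ui":        ["order_service", "auth_api"],
    "batch_billing": ["order_service", "customer_db"],
    "order_service": ["customer_db", "pricing_rules"],
    "auth_api":      ["customer_db"],
    "reporting":     ["customer_db", "order_service"],
}

RISK_THRESHOLD = 3  # assumed: this many dependents or more means high change impact


def fan_in(dependencies):
    """Count, for each component, how many other components depend on it."""
    counts = defaultdict(int)
    for callees in dependencies.values():
        for callee in callees:
            counts[callee] += 1
    return counts


if __name__ == "__main__":
    for component, dependents in sorted(fan_in(DEPENDENCIES).items()):
        label = "HIGH-IMPACT" if dependents >= RISK_THRESHOLD else "normal"
        print(f"{component}: {dependents} dependent components ({label})")
```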
More about the importance of focusing on structural quality and reducing business disruption risk is available in the Gartner-CAST white paper. An executive summary of the white paper is also available.

Forecast Upbeat for CAST

I’d like to begin by offering a resounding THANK YOU to CAST’s worldwide roster of customers and partners. It’s because of you that the good news just keeps coming from CAST!
A couple weeks ago, we announced that Gartner had named us to their “Cool Vendors” list, counting us among the vendors that “approach the application services market with fresh ideas” and “have the potential to change or affect a market in a significant way.”
That approach and potential translated into significant business gains in the first quarter of this year and today we announced that CAST experienced an 18% increase in first quarter revenue over 2010. In addition, in the first quarter of this year we added a number of significant new customers and struck lucrative deals with important new partners, expanding a channel reach that already included some of the leading systems integrators in the industry.
This success has not gone unnoticed.
Two European equity analyst firms – Oddo Securities and Gilbert Dupont – have started coverage of CAST with “buy” ratings and a target price. Raphaël Hoffstetter, an analyst at Oddo, went so far as to say, “Positioned in a recent market that is in an expansionary phase, CAST should benefit from specific levers, for which the signals have become positive over recent quarters.”
But while Hoffstetter points to the market for our success, we know that the key to our performance starts in two places – our customers and our partners. Our customers paved the way for software measurement in the IT industry and our partners have been our comrades in arms, either opening new doors for us or helping us implement solutions for new clients.
And because we cannot possibly thank you enough, I’d like to finish right where I started, by saying to our partners and customers:
THANK YOU!!!

Yeah, We’re Cool

We’ve known it all along, and now the rest of the tech industry has been told, thanks to the folks at Gartner who earlier this month named us to their “Cool Vendors in Application Services, 2011” report.
For this report, Gartner selects vendors that “approach the application services market with fresh ideas” and “have the potential to change or affect a market in a significant way.” The list is “designed to highlight interesting, new and innovative vendors, products and services.”
The companies highlighted in this report “leverage emerging technologies, distinctive capabilities and proprietary intellectual capital to deliver increased value to their clients.” CAST has invested more than a decade of R&D in advanced technology and benchmarking capabilities that help IT managers continually analyze and measure their most sophisticated business applications.
As IT spending begins to rebound, we believe companies will spend some of that money on making sure the applications they integrate into their systems are as high-quality and risk-free as possible to keep down maintenance costs. After all, in business, the coolest thing is making money, but before you can make money, you need to reduce your costs wherever possible.
As cool as we are, though, there are cooler companies out there…specifically our customers, without whom we would not be part of this list. As our Chairman and CEO Vincent Delaroche said when we announced our inclusion on the Gartner “Cool Vendors” list, “I would like to thank our key enterprise customers for having paved the way for software measurement in the IT industry.”
Yeah, CAST is cool, but we’re not going to rest on our laurels. So if you think what we’ve done so far is cool, just wait to see what’s coming up next! (HINT: We previewed it in early March)