Overcoming the Need for Greed

Developing software, like almost any facet of business, can often be overtaken by some rather sinful thoughts and actions. This is why I really enjoyed a recent post on GigaOm by Magne Land, scrum master and tech lead at RightScale, who compares issues within software development to the “Seven Deadly Sins.”
While Land makes a great case for each of the sins, the one that resonates the most is the sin of Greed.
Anyone who has ever seen Wall Street knows that every for-profit business in the world is driven by greed. The main character, Gordon Gekko, goes so far as to say, “Greed is good.” But in the development of software, greed is not good. In fact, as Land points out:
The problem is that greed leads to short-term goals, which leads to technical debt and long-term slowness.
Forgive Us Our Technical Debt
When it comes to software, greed equals speed, and as I have previously noted in this blog, “Speed Kills.” Nevertheless, companies seem to believe that getting software out the door faster is the financially wise move. In truth, this is probably the wrong move for long-term benefit.
As the 2011 CRASH report released by CAST in December pointed out, applications carry on average $3.61 of technical debt per line of code. Considering that 15% of the applications studied exceeded one million lines of code, a significant portion of the apps studied carry more than $3.6 million in technical debt! Why any company would concede that much of its IT budget just to fix the problems it felt were not important enough to slow down and fix the first time is beyond me…but it is not beyond those mired in the sin of greed.
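To see where a figure like that comes from, here is the back-of-the-envelope arithmetic as a quick sketch; the one-million-line application size is simply the threshold cited above, not a specific application from the study.

```python
# Back-of-the-envelope estimate using the CRASH report's average figure.
debt_per_line = 3.61          # average technical debt per line of code (USD)
lines_of_code = 1_000_000     # a one-million-line application

estimated_debt = debt_per_line * lines_of_code
print(f"Estimated technical debt: ${estimated_debt:,.0f}")  # roughly $3,610,000
```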
Even beyond the financial concession, more than one-third (35%) of the violations identified in the report – the violations that make up technical debt – are of the type that would have a direct, significant negative impact on the business. These violations fall into the areas of performance, security and robustness (uptime), and they corroborate that companies must pay greater attention to the structural quality of their applications. Those that don’t are likely to face very costly problems ranging from application lag time to outages to security breaches, all of which can cost organizations money and damage their reputations.
The only reason any company would accept this is because of greed.
Lead Us Not Into Temptation
When it comes to technical debt, a company needs to determine how much, if anything, it should invest in remediating it.
This is what technical debt does – it puts a monetary figure on application structural quality and enables comparisons between IT costs and potential losses due to failure that were not possible before. The goal is to keep the number of structural quality violations – or more importantly, the cost to fix them – well below the cost the company would incur should they be deployed and a failure ensue.
The most effective and efficient route to identifying technical debt is one that evaluates the structural quality of a company’s most mission-critical applications through a platform of automated analysis and measurement. As each of these applications is built, rewritten or customized, a company measures its structural quality at every major release; once the applications are in operation, it measures their structural quality every quarter.
In particular, companies must keep a watchful eye on the violation count, monitor how it changes, and calculate the technical debt of the application after each quality assessment. Once the company has a dollar figure on technical debt, it needs to compare that figure to the application’s business value to determine how much technical debt is too much and how much is acceptable based on the marginal return on business value.
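As a rough illustration of that quarterly comparison, here is a minimal sketch; the violation counts, debt figures, business value and 5% tolerance are all hypothetical placeholders, not CAST data.

```python
# Hypothetical sketch: track the violation count and the debt figure after each
# quarterly assessment, and flag when debt crosses a threshold derived from the
# application's business value. All numbers are invented for illustration.

business_value = 20_000_000   # estimated annual business value of the application (USD)
acceptable_ratio = 0.05       # tolerate debt up to 5% of business value

quarterly_assessments = [
    # (quarter, violation count, technical debt in USD)
    ("Q1", 4_800, 610_000),
    ("Q2", 5_300, 760_000),
    ("Q3", 6_100, 1_150_000),
]

threshold = acceptable_ratio * business_value
for quarter, violations, debt in quarterly_assessments:
    status = "acceptable" if debt <= threshold else "past the tipping point"
    print(f"{quarter}: {violations} violations, ${debt:,} in debt -> {status}")
```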
Once technical debt is identified and monetized, and a determination of the tipping point is made, companies can turn that identification into an action plan that works to reduce the errors that lead to technical debt in a financially prudent manner. This leaves companies still delivering software on time, but delivering software that is far more structurally sound…and which forgives the sin of greed.

Next AppDev Star

We’re a society that is always looking for the “next big thing.”
Just check out the TV listings. We tune in to find out who will be the “Next Top Model,” “Next Food Network Star,” “Next Design Star” and “Next Iron Chef.” Technology is also quite interested in “The Next Big Thing,” as witnessed by the 19.9 million results you get when you Google “Next Big Thing in Technology.” But while most of the TV “Next” searches focus on an individual, most of the “next big things” discussed in tech have been trends, not people.
Today, though, CAST and Dr. Dobb’s, the most respected brand among software development professionals, launched their search for the “Next Application Development Star” with the announcement of the BreakOut Award. The BreakOut Award is a search for the world’s best undiscovered new software application, and the winning developer will walk away with a grand prize of $10,000!
Besides uncovering the next rising star in the application development world, the BreakOut Award competition seeks to encourage innovation while also stressing the importance of software structural quality. Developers – both independent and those from corporate teams – will have their applications judged by a panel of tech industry experts from around the globe, including Dr. Dobb’s Editor in Chief Andrew Binstock and senior leaders and CEOs from Gartner, GoodData, Hubspot, IBM Global Business Services, Kimberly-Clark and TechHub. In addition, entrants will have their applications analyzed for structural quality using CAST’s Highlight application.
Developers can register at BreakOut.DrDobbs.com, download the Highlight app from the site, use it to analyze their application, and upload the results to BreakOut.DrDobbs.com.

Will the REAL Agile Please Stand Up?

I hate Geometry.
Actually, I do not hate the concept of Geometry – I’m rather partial to shapes and appreciate the need to calculate the areas, perimeters, volumes, et al. that they represent. What I hate about the subject – or should I say “hated” (past tense), since I haven’t had a Geometry class since the mid-1980s – were the proofs I had to do in order to get full credit for my work.
I’m a results-oriented person. I’m usually far more concerned about getting things done right than getting things done the right way. Sometimes I think there’s a bit too much focus on the process of how things are done and not enough focus on the quality of the result that comes from that process.
This is one of the reasons why I am vexed by the frequent bastardization of real agile development, where the need for a speedy process overtakes quality initiatives.
Right Path vs. Right Answer
When done right, agile is not a problem. When done wrong, agile is the problem.
The intent of agile is to break a project into a number of pieces so that instead of one group trudging through the entire project, many groups – or scrums – develop the parts of an application in a fraction of the time. Not only should this make the development process more efficient, it should also allow developers the time to scrutinize their work so the result is of greater structural quality.
As certified Scrum master and agile proponent Charlie Martin recently posted over at Software Quality Connection, however, far too many companies use agile as “window dressing.”
Martin notes:
…people learn the phrase “agile methods” and think they sound cool, but they adopt only some of the window dressing – something I’ve called “cargo cult agile.” There’s this general notion that as long as you do some of the things that “agile people” do, you’re using agile methods.
What can be even more insidious, though, is when people think of “agile” in terms of what agile methods don’t do. Agile minimizes upfront design, so obviously you’re being agile if you eliminate upfront design. Agile minimizes process, so you just eliminate process.
The result is what Dilbert’s boss summarized as “No more planning and no more documentation. Just start programming and complaining.”
Martin points out that this half-hearted approach to agile actually is contrary to the tenets of the true Agile Manifesto, which preaches the minimization of overhead, adapting to change, putting interaction, collaboration and communication ahead of process and, perhaps most importantly, comprehensive concern over the quality of the resulting software.
Processing Perfection
Martin says that achieving quality in an agile environment can be hard; that it takes discipline.
It means saying “No,” and often you’re saying No to your boss. No, you can’t come on Thursday and get an additional feature on Friday; put it into the user stories and we’ll schedule it. No, you can’t shorten the schedule by two weeks unless you choose which user stories you don’t need.
Unfortunately, in this day and age such discipline can be difficult to justify, and developers may need evidence to support slowing down the process. One thing they can do is incorporate automated analysis and measurement into the build process to identify issues as they go. These issues can then be translated into financial terms (along the lines of how CAST calculated technical debt in its CRASH study in December), which places developers and management on common ground for discussing the need to focus more on the structural quality of the software rather than committing all efforts to a “just get it done fast” attitude.
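For example, a team might wire a quality gate into the build so the conversation starts from numbers rather than opinions. The sketch below is hypothetical: the analysis command, report format, per-violation cost and budget are placeholders for whatever tooling and figures a team actually agrees on.

```python
# Hypothetical build gate: run a static-analysis step, read the number of new
# high-severity violations it reports, put a rough dollar figure on them and
# fail the build when that figure exceeds an agreed budget.
# "quality-scan" and the report format are placeholders, not a real CLI.
import json
import subprocess
import sys

COST_PER_HIGH_SEVERITY_FIX = 75   # assumed burdened cost to fix one violation (USD)
DEBT_BUDGET_PER_RELEASE = 5_000   # agreed ceiling on new debt per release (USD)

subprocess.run(["quality-scan", "--output", "violations.json"], check=True)

with open("violations.json") as report_file:
    report = json.load(report_file)   # expected shape: {"new_high_severity": <count>, ...}

new_debt = report["new_high_severity"] * COST_PER_HIGH_SEVERITY_FIX
print(f"Estimated new technical debt this release: ${new_debt:,.0f}")

sys.exit(1 if new_debt > DEBT_BUDGET_PER_RELEASE else 0)
```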
After all, and with all due respect to the late John Downs, my ninth grade Geometry teacher, the quality of the result is more important than the process.
 

CRASHing Into Technical Debt

Without going into specific finances, I make twice as much money as I did just 10 years ago. You would think this would be an indication that times, for me anyway, are good; yet I still seem to have the same question every month the week before I get paid, “Where did all my money go?”
It really isn’t rocket science, though. While my income has more than doubled, my debts have gone up at least that much, if not more. Beyond the obvious inflation factors (food, gas and entertainment costs have all gone way up in the last decade), there are many other things for which I am indebted. I now own a home, have a child, and whereas a decade ago I drove a used car that I paid for in cash, I now drive a nice SUV…that has almost three years’ worth of installment payments left on it.
Were I to suddenly lose my job or have to take a pay cut like so many others in this economy, there are areas where I would need to cut back. Obviously, I would cut my entertainment budget first followed by other non-critical things. The deeper the cuts went, though, the more I would need to sit down and calculate just which debt could be cut and which was necessary debt.
It is in much the same way that companies need to look at their technical debt, and that is what CAST had in mind when it performed the calculations for its recent CAST Report on Application Software Health (CRASH) study.
Looking for Not-So-Easy Money
When calculating technical debt in its recent CRASH study, CAST set out to establish a realistic, true-to-business type of approach rather than merely taking the authoritative approach. Building off the methodology of its original software quality study in 2010, CAST made adjustments to enhance and improve the calculation…and according to the architect of the study, Dr. Bill Curtis, they are still open to ways to improve it.
“Our goal is to provide an automated and repeatable process for our many clients to use technical debt as an indicator,” says Curtis. “We provide this information in combination with many technical, quality and productivity measures to provide guidance to our clients.”
As cited in the CRASH report, CAST’s approach to calculating technical debt can be defined by the following:

The density of coding violations per thousand lines of code (KLOC) is derived from source code analysis using the CAST Application Intelligence Platform (AIP). The coding violations highlight coding issues around the five health factors of application software: Security, Performance, Robustness, Transferability and Changeability.
Coding violations are categorized into low, medium and high severity violations. In developing the estimate of technical debt, it is assumed that only 50% of high severity problems, 25% of moderate severity problems and 10% of low severity problems will ultimately be corrected in the normal course of operating the application.
Conservative estimates of time and cost were used all around. To be conservative, it is assumed that low, moderate and high severity problems would each take one hour to fix, although industry data suggest these numbers should be higher – in many cases much higher – especially when the fix is applied during operation. The estimated rate for the developer who fixes the problem is also conservatively estimated at an average burdened rate of $75 per hour.
Technical debt is therefore calculated by taking the sum of 10% of Low Severity Violations, 25% of Medium Severity Violations and 50% of High Severity Violations, then multiplying that sum by the number of hours needed to fix the problems and multiplying that product by the cost per hour ($75) to fix the problems.
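Restated in code, the calculation described above looks roughly like this. The violation counts in the example are invented; the percentages, one-hour fix time and $75 rate come from the report’s stated assumptions.

```python
# Technical debt per the CRASH methodology described above:
# (10% of low + 25% of medium + 50% of high severity violations)
# multiplied by hours to fix (assumed 1) and the burdened rate ($75/hour).
def technical_debt(low_violations, medium_violations, high_violations,
                   hours_per_fix=1, rate_per_hour=75):
    violations_to_fix = (0.10 * low_violations
                         + 0.25 * medium_violations
                         + 0.50 * high_violations)
    return violations_to_fix * hours_per_fix * rate_per_hour

# Example with made-up counts for a single application:
print(technical_debt(low_violations=12_000,
                     medium_violations=4_000,
                     high_violations=900))   # 198750.0, i.e. about $199,000
```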

Pay Me My Money Down
Great, so now we know how to calculate technical debt…but what’s next?
Technical debt can identify just how much issues with application software are costing a company, but then the question arises: “What do we do with that information?” The answer is to develop a technical debt action plan that determines how much technical debt can be absorbed before the application (or applications) in question begin to lose their value to the business.
Sounds complicated, but CIOs and heads of applications can start by using an automated system to evaluate the structural quality of their five most mission-critical applications. As each of these applications is built, measure its structural quality at every major release or, if the applications are in production, measure their structural quality every quarter.
In particular, keep a watchful eye on the violation count; monitor the changes in the violation count and calculate the technical debt of the application after each quality assessment. Once you have a dollar figure on technical debt, compare it to the business value to determine how much technical debt is acceptable versus how much is too much based on the marginal return on business value. (A framework for calculating the loss of business value due to structural quality violations can be found in “The Business Value of Application Internal Quality” by Dr. Bill Curtis.)
Calculating technical debt and how to manage it is not that different from managing personal debt; in fact, it can be easier because it is more formulaic.
I sure wish I could just as easily find a framework for calculating the loss of value of my SUV.

The Value of Customer Satisfaction

On a recent trip to Paris, I needed a break from classic French cuisine. As my stomach grumbled while I walked along the Marais, I encountered a line of people standing outside a restaurant. Now, I knew nothing about this place, but I put my faith in the wisdom of crowds. It turned out to be an Israeli restaurant that specialized in falafel. Actually, “The World’s Greatest Falafel,” according to Lenny Kravitz (as the tattered green sign posted on the wall claimed).
I stood in line for 20 minutes, nervous about how to say “hold the onions” in French. As I approached the counter, the man asked what I wanted. “Falafel…,” I managed to stutter, as he leapt back from the counter and hurriedly began constructing a massive pita sandwich. I tried to interject, but between my terrible French and the din of the kitchen, he couldn’t hear me. So I paid him and took my sandwich to a nearby bench.
I have to tell you, it may well have been the best sandwich – let alone falafel – I’ve ever had. It was the perfect combination of hot and cold, crispy and smooth. The falafel themselves were undersized, at least based on my previous experiences, but they were crispy and flavorful. The raita was cold and sweet. The combination of vegetables could only have been concocted by a madman – or someone cleaning out the fridge. I ate that thing quickly, and with a giant grin on my face. Plus, although it seemed massive at the time, it didn’t leave me feeling bloated and caused no indigestion! #bonus!
So why do you care?
Obviously, that falafel was something special. A well-constructed meal that blew away my expectations of what a falafel, nay a sandwich, should be. In other words, I’m a VERY satisfied customer.
So what?
Well, here I am dedicating my time to telling you about it. I’ve already raved about it to my friends and family. I’m definitely going back to get another one, and I’m bringing people with me. If they raise the price or the exchange rate gets worse, I’m still getting one.
The point is that “customer satisfaction” is a real value driver. There’s a ton of research on customer satisfaction, loyalty and financial performance if you don’t believe me. But, like me, you are a consumer and you know it’s true too. When you encounter a quality interaction with an individual, product or company, you notice it. And these days that is so rare that it stands out in your brain – doesn’t it?

The Speed of Diligence

One of the oldest conversations on record is the discussion of how to measure effective software development. One of the most used, most abused and least understood metrics is “velocity.” Think Corvettes versus Volkswagens.
Just to keep terms straight, velocity is the sum of the estimates of delivered/accepted features per iteration. Velocity can be measured in the same units as feature estimates, whether this is story points, days, ideal days or hours.
On the one hand, velocity is a very simple measure for evaluating the speed at which teams deliver business value. It can provide tremendous insight into a project’s progress and status. Velocity tends to stabilize over the lifecycle of a project unless the project team varies or the length of the iteration changes, which makes it a valuable tool for future planning. If you, as a planner, accept that project goals and teams may change over time, velocity can be used to plan releases deep into the future. Velocity proponents claim many managers over-think the idea of velocity and add too much complexity to it.
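As a quick illustration of that planning use, here is a minimal sketch; the story-point figures are invented.

```python
# Velocity as the sum of accepted story points per iteration, averaged over
# recent iterations and used to project the remaining backlog.
# All numbers are invented for illustration.

accepted_points_per_iteration = [21, 18, 24, 20, 22]   # last five iterations

velocity = sum(accepted_points_per_iteration) / len(accepted_points_per_iteration)

remaining_backlog_points = 240
iterations_remaining = remaining_backlog_points / velocity

print(f"Average velocity: {velocity:.1f} points per iteration")
print(f"Projected iterations to finish the backlog: {iterations_remaining:.1f}")
```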
The Dark Side of Velocity
On the other hand, like many metrics that are attractive because of their simplicity, velocity has a dark side, as Esther Derby observes in a recent post. She notes that velocity is easy to manipulate or misuse: alter the definition of “done” and you finish more stories, and managers can use velocity as a metric to compare, reward and/or punish teams.
Velocity also emphasizes the wrong attributes. It implies that if velocity isn’t continuously increasing or is erratic, there’s a problem with the development team. That could be true, but many of the factors that affect velocity are out of the team’s hands. The fact that velocity tends to stabilize over time – which is what makes it a good predictive tool – can also punish teams whose managers expect continuous improvement. There might also be issues with how work flows to the team, and the team might be interrupted periodically with support calls or other activities that disrupt their work.
Some managers shift the focus of velocity to measure the rate at which a project is moving forward (versus how much a team is producing). This also creates issues, because it incents the team to avoid the hygienic aspects of the project in favor of maximizing the amount of code that’s written.
Still other managers use velocity to measure the rate at which a team completes work. This is a measure of how encumbered they are by circumstances such as their knowledge levels, bureaucratic overhead, technical debt, external vetoes and other issues. It leads to people doing busy-work just to generate a high level of activity.
It’s also easy to manipulate velocity. If you want velocity to go up, just redefine activity – what used to be a 2-point story is now a 5-point story.
It’s About the Value to the Customer
Managers should remember that velocity does not equal productivity. Velocity measures how much a team is getting done; it doesn’t tell you whether the team should be getting that work done faster. The most effective way to gauge that is to use industry standards, such as function points, to measure productivity.
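A sketch of what that might look like, with invented figures rather than a prescription for any particular counting standard:

```python
# Rough productivity measure based on functional size rather than velocity:
# function points delivered per person-month of effort. Figures are invented.

function_points_delivered = 180   # measured functional size of what shipped
effort_person_months = 24         # effort spent delivering it

productivity = function_points_delivered / effort_person_months
print(f"Productivity: {productivity:.1f} function points per person-month")
```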
A team could continuously deliver the same amount of software and still be unproductive. Why? Because the team might find out later that the code it delivered didn’t provide any value. At the end of the day, it’s the satisfaction of the customers that determines ultimate success.
An important part of measuring value is analyzing the quality of the software developed. Introducing transparency into application development, maintenance and sourcing helps ensure the most effective and productive software development process. It also reduces the potential for software-related business disruption and risk, while reducing IT costs.
And while the Corvette might get the girls, the Volkswagen is often the better solution over the long haul.

Is Your ‘Wastebook’ Overflowing?

Every year, Senator Tom Coburn compiles the Wastebook – A Guide to Some of the Most Wasteful and Low Priority Government Spending of 2011.
According to Senator Coburn in the report:
“This report details 100 of the countless unnecessary, duplicative or just plain stupid projects spread throughout the federal government and paid for with your tax dollars this year that highlight the out-of-control and shortsighted spending excesses in Washington.”
The list of $7B in wasteful projects is as amusing as it is disturbing. As a taxpayer, reading a list of this ilk makes our leaders and decision-makers seem detached, self-serving and simply incompetent. Political motivations aside, Coburn’s act of holding a mirror to the process – not to mention to the decision-makers – is a relevant example for us all.
In this economy, none of us can continue to get away with spending money poorly. The rationale behind decisions and priorities isn’t always apparent to those outside the process. However, especially in the business world, we are counting on those people for support and to execute such decisions.
Fact-Based Insight

As reported in Government Computer News, this year’s report includes a number of high-tech projects “involving Twitter, Facebook, video games, podcasts, holographs and other new technologies.”
The ability to support decisions quickly and objectively has always been a portfolio management struggle. Many companies look to enterprise tools to bear this burden, when in reality most tools on the market are spreadsheets on steroids – hollow shells that simply automate poor practices. Instead, organizations should shift their focus to systems and processes that provide objective rationalization of their portfolios. We usually know such practices by the monikers Application Portfolio Management (APM) and Project Portfolio Management (PPM).
One of the most obvious examples of where APM or PPM fails is found in the notion of “technical risk.” Many tools offer the ability to view a portfolio in two dimensions: business value and technical risk. The business value component typically applies some science to user-defined inputs concerning an organization’s mission-critical applications. These tools take the same approach when deriving technical risk. Combining these two semi-subjective indicators then provides “insight” for decision-makers.
Taking the Risk Out of Technical Risk
There are many definitions of technical risk including this one from BusinessDictionary.com: “Exposure to loss arising from activities such as design and engineering, manufacturing, technological processes and test procedures.”
Rather than relying on subjective input for these parameters, wouldn’t employing a more rigorous process that objectively assesses technical risk yield far better information? First, it would reduce the gaming of the portfolio rationalization process. Second, a repeatable and automated process would reduce the risk and effort involved in generating better-quality information.
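As a sketch of what more objective rationalization could look like, here is a hypothetical example; the application names, business-value scores and violation densities are all invented.

```python
# Hypothetical portfolio view: pair a user-supplied business-value score with a
# technical-risk score derived from measured violation density instead of
# subjective input, then list the applications most out of balance first.

portfolio = [
    # (application, business value score 1-10, high-severity violations per KLOC)
    ("Order Entry",      9, 4.2),
    ("Claims Engine",    8, 7.5),
    ("HR Self-Service",  4, 1.1),
]

def technical_risk(violations_per_kloc, worst_case=10.0):
    """Normalize a measured violation density onto a 0-10 risk scale."""
    return min(violations_per_kloc / worst_case, 1.0) * 10

for app, value, density in sorted(portfolio,
                                  key=lambda item: technical_risk(item[2]) - item[1],
                                  reverse=True):
    print(f"{app}: business value {value}, technical risk {technical_risk(density):.1f}")
```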
If this capability existed today, I wonder how many projects would make it into the “Wastebook” portfolios of corporate IT departments. Well, it does exist – in the form of automated analysis and measurement platforms that give companies the comprehensive visibility and control needed to achieve significantly more business productivity from complex applications, along with an objective and repeatable way to measure and improve application software quality. And not only can these platforms improve application software quality, they can identify areas of technical risk, help monetize them, and provide a basis for comparison with business value to determine a course of action.
I wonder what Government Computer News would have said about the duplicative and shortsighted decisions made by IT departments in the private sector if it were to apply automated analysis and measurement. Would it find the private sector’s “Wastebook” as overflowing as the government’s?