When I speak to customers and prospects trying to incorporate static code analysis into their software development processes, one of the most common questions I get is “How do we incorporate the outputs of static analysis into SLAs?” Given the prevalence of outsourcing in Fortune 500 and Global 1000 companies, this question is not surprising. Companies have always struggled to measure the quality of the products being delivered, beyond the typical defect densities measured after the fact.
To help organizations answer this and similar questions, I thought I would compile some frequently asked questions around introducing Structural Quality Metrics into SLAs. Before I get into the details, I want to caution readers against using these metrics simply for monitoring, or as a tool to penalize vendors. That approach invariably becomes counterproductive; instead, I recommend looking at these metrics as an opportunity to make the vendor-client relationship more transparent and fact-based—a win-win for both sides.
In addition to the FAQs below, for more on this topic you don’t want to miss our next webinar on May 16th. We are pleased to have Stephanie Moore, Vice President and Principal Analyst with Forrester Research, discussing how to “Ensure Application Quality with Vendor Management Vigilance.” You can register here.
What kind of structural quality metrics can be included in SLAs?
Quality Indices: Code analysis solutions parse the source code and identify code patterns (rules) that could lead to potential defects. By categorizing these improper code patterns into application health factors such as Security, Performance, Robustness, Changeability and Transferability, you can aggregate and assign a specific value to each category, like the Quality Index in the CAST Application Intelligence Platform (AIP). You should set a baseline for each of these health factors and monitor the overall health of the application over time.
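To make that aggregation concrete, here is a minimal sketch in Python; the rule names, severity weights and scoring formula are illustrative assumptions, not CAST AIP’s actual model:

```python
# Minimal sketch: rolling rule violations up into health-factor indices.
# Rule names, weights, and the scoring formula are illustrative only.

# Each detected violation is tagged with a health factor and a severity weight.
violations = [
    {"rule": "SQL injection risk", "factor": "Security", "weight": 9},
    {"rule": "Unbounded loop over large table", "factor": "Performance", "weight": 7},
    {"rule": "Empty catch block", "factor": "Robustness", "weight": 5},
    {"rule": "Method over 100 lines", "factor": "Changeability", "weight": 3},
]

def quality_index(violations, factor, kloc, max_penalty=10.0):
    """Score a health factor from 0 (worst) to 10 (best), normalized by code size."""
    penalty = sum(v["weight"] for v in violations if v["factor"] == factor)
    return max(0.0, 10.0 - min(max_penalty, penalty / kloc))

for factor in ("Security", "Performance", "Robustness", "Changeability"):
    print(f"{factor}: {quality_index(violations, factor, kloc=25):.2f}")
```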
Specific Rules: Quality indices provide a macro picture of the structural quality of the application; however, there are often specific code patterns (rules) that you want to avoid outright. For example, if the application is already suffering from performance issues, you want to make sure nothing violates a rule that would further degrade performance. These specific rules should be incorporated into SLAs as “Critical Rules” with zero tolerance.
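As a sketch of how zero tolerance might be enforced, the following hypothetical gate (run, for example, as part of a build) fails a delivery when any critical rule is violated; the rule names are made up for illustration:

```python
# Minimal sketch of a "critical rules, zero tolerance" SLA gate.
# In practice, the detected violations would come from the analysis
# tool's report for the release being delivered.

CRITICAL_RULES = {
    "Avoid SQL queries inside loops",   # known performance killer
    "Avoid hard-coded credentials",     # security exposure
}

def sla_gate(detected_violations):
    """Fail the delivery if any zero-tolerance rule is violated."""
    hits = [v for v in detected_violations if v in CRITICAL_RULES]
    if hits:
        raise SystemExit(f"SLA violation -- zero-tolerance rules triggered: {hits}")
    print("SLA gate passed: no critical rule violations.")

sla_gate(["Method over 100 lines"])  # non-critical finding: gate passes
```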
Productivity: The amount charged per KLOC (kilo lines of code) or per Function Point. Static analysis solutions should provide the size of the code base added in a given release. Along with KLOC, CAST AIP provides data on the number of Function Points that have been modified, added and deleted in a release. This is a very good metric, especially in a multi-vendor scenario, where you can see how different vendors are charging you, set targets and monitor productivity for each vendor.
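A minimal sketch of that productivity comparison, using invented vendor names and figures:

```python
# Minimal sketch of the productivity metric: cost per function point per vendor.
# Vendor names and figures are made up for illustration.

releases = [
    {"vendor": "Vendor A", "cost": 180_000, "fp_delivered": 450},
    {"vendor": "Vendor B", "cost": 210_000, "fp_delivered": 380},
]

for r in releases:
    cost_per_fp = r["cost"] / r["fp_delivered"]
    print(f"{r['vendor']}: ${cost_per_fp:,.0f} per function point")
```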
How do you set targets for Structural Quality Metrics?
The ideal way to set targets is to analyze your applications for a minimum of two to three releases and use the average scores as a baseline.
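As a rough illustration, here is a minimal sketch of that averaging, assuming hypothetical index scores from three consecutive releases:

```python
# Minimal sketch: setting a target as the average score across recent releases.
# The scores are hypothetical analysis outputs for three consecutive releases.

security_scores = [7.8, 8.1, 7.9]  # e.g., Security index for releases R1-R3
baseline = sum(security_scores) / len(security_scores)
print(f"Security baseline (target floor): {baseline:.2f}")
```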
An alternative method is to use industry benchmark data. CAST maintains data from hundreds of companies across different technologies and industries in a benchmarking repository called Appmarq, and it can be used to set targets based on industry averages or best-in-class performers.
When do you introduce Structural Quality Metrics into an SLA?
Of course, the best time to introduce Structural Quality Metrics into SLAs is at the beginning of the contract, when it is easiest to set expectations on quality objectives based on the outputs of the static analysis solution. However, if you are in the middle of a long-term contract with a vendor, you can still try to make changes to the existing SLAs. A situation like this will require collaboration with the vendor to define common goals on why, how and when to use a static code analysis solution, and what kind of metrics make the most sense in the context of those goals.
To hear an analyst perspective on achieving maturity in your outsourcing relationships, don’t forget to register for our webinar on May 16th with Forrester Analyst Stephanie Moore.
Over the past 10 years or so, it has been interesting to watch the metaphor of Technical Debt grow and evolve. Like most topics in software development, few concepts or practices are fully embraced by the industry without some debate or controversy. Regardless of your personal thoughts on the topic, you must admit that the concept of Technical Debt seems to resonate strongly outside of development teams and has fueled the imagination of others to expound on the concept and extend it to additional areas such as design debt. There is now a spate of resources dedicated to the topic, including the industry aggregation site OnTechnicalDebt.com.
We recently had David Norton, Research Director with Gartner Research, as the guest speaker on a webinar, “Get Smart about Technical Debt”. During the webinar, Mr. Norton was passionate about the topic; he believes that Technical Debt will transcend casual use by architects (and marketers) to find a more permanent place in the vernacular of CIOs and CFOs. Spend just a little time with Mr. Norton and it is clear that he is on a quest to establish the concept as an important indicator of risk, and that the practice of measuring and monitoring Technical Debt will soon become a requirement as the industry continues to mature.
In my personal view, Technical Debt, although not perfect, is one of the few development metrics that rises above the techno-speak of the dev team. Technical Debt seems to resonate because it attempts to quantify the uncertainty within a product or development process—an uncertainty that underlies all the years of process improvement initiatives, training classes, project management tools and overhead typically forced upon dev teams.
I believe that rather than fighting this metaphor, dev teams should embrace Technical Debt and work with the organization to create a common definition and method for calculating it. Once a definition and method are determined, it takes very little overhead to provide ongoing measurement. To me, the value to the development organization is that we are now armed with a “hard cost” for poor or myopic decisions.
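As one illustration of what such a method could look like (in the spirit of models like SQALE, not a formula this post prescribes), the sketch below prices out violations by severity; the counts, remediation times and hourly rate are all assumptions:

```python
# One commonly used way to put a "hard cost" on Technical Debt:
# violations x remediation effort x hourly rate.
# All figures below are illustrative assumptions.

violations_by_severity = {"high": 40, "medium": 250, "low": 900}
remediation_hours = {"high": 2.0, "medium": 1.0, "low": 0.25}  # effort to fix one violation
hourly_rate = 75  # blended developer cost in dollars

debt = sum(
    count * remediation_hours[sev] * hourly_rate
    for sev, count in violations_by_severity.items()
)
print(f"Estimated Technical Debt: ${debt:,.0f}")
```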
I really enjoyed Mr. Norton’s passion and the great discussions and questions from those who joined us on the webinar. If you missed it, you can watch the recording here. I’d be interested in your take on his view.
If you want to learn more about Technical Debt, there are some great resources listed below:
The issue of hacking in today’s society has gotten as serious as a heart attack – literally!
In what seems like something that should be relegated to a bad action movie or the sinister deeds of a cartoon villain, researchers have demonstrated that hackers have the capability to send radio signals that could reprogram implantable medical devices, such as pacemakers or insulin pumps. Fortunately, there have been no actual cases of fiends roaming the streets striking down people dependent upon pacemakers, but the mere fact that it is a possibility is frightening.
I honestly do not think that, in his worst nightmare, Wilson Greatbatch, the inventor of the implantable pacemaker, who passed away September 28 at the ripe old age of 92, could have envisioned someone using an external signal to disrupt the heart-regulating device or drain its battery, causing the person’s heart to stop beating. However, in the sad reality that is modern society, where hackers need no reason to ply their dastardly deeds beyond “I’m bored, what can I mess with?”, it almost stands to reason – no matter how morbid that reasoning may be – that scientists developing current generations of pacemakers need to consider how they can be hacked.
Don’t Go Breaking (Into) My Heart
If it can be done in a lab, it can be done in real life, so while the above scenario sounds frightening, there is hope. Researchers at MIT and the University of Massachusetts are currently developing external radio-frequency jamming equipment that today’s pacemaker users can wear to protect themselves. Scientists are also working on embedding such equipment into future generations of pacemakers.
This brings up a good question, though – what else remains from previous generations in these medical devices that may be vulnerable to modern technology?
Improving on technology usually means not having to reinvent the wheel. With all of the technology that goes into one of these devices, they cannot possibly be re-invented every time a new version is built or an improvement is added. This means that legacy software abounds in these devices, and code that may or may not have been vulnerable to breach years or even decades ago may now represent a weak link in the device.
Straight from the Heart
As science continues to build upon these devices and add improvements, one hopes the focus is not only on what’s new in them, but also on what is old. The problem, of course, is that there are so many lines of code to assess. Add to that code written in antiquated languages, along with lines that no longer need to be included or no longer meet current standards, and it becomes clear that device manufacturers cannot depend upon manual assessments, which would be grossly inefficient at uncovering possible issues in the code that regulates, controls and monitors these devices.
Much as it does in identifying issues with enterprise applications – the lifeblood of today’s business – automated analysis would certainly be a more efficient tool for identifying issues with embedded legacy applications in medical devices and for ensuring the structural quality of the software that runs them.
By using automated analysis and measurement to identify issues with code embedded in medical devices, companies can get to the heart of the matter and keep unauthorized hacking of implanted medical devices something found only in the lab or on the silver screen.
Recently, as I sat on a Northeast Corridor train, the ticket-taker informed us that we would be delayed 15 minutes. As I thought about the impact on my day, a flutter of activity rippled through the cabin. Passengers called bosses, colleagues, wives and customers, spreading the news. What was interesting was that the relayed news differed: some people doubled the time, others bumped it up to a solid hour and, shockingly, no one made it shorter.
As a software project manager, I had this reaction often. Whenever I received an estimate, I instinctively doubled it. However, as the project manager, I was in a position to drill down and get more insight into the origin of the estimate. Was it a WAG (Wild A** Guess) or was it thought out? What assumptions were used? What could I do to make it shorter? The estimate grew as I dug in, but so did the work. The conversation surfaced issues that needed to be discussed and brought clarity to the task at hand.
Now, as a regular passenger spending my money with NJ Transit, I felt compelled to “dig in” and see if I could help or at least understand the situation better. Was that estimate from the conductor or from Penn Station? Why were we delayed? Was it a signal, a power line or a derailment? Any insight would help me determine the accuracy or credibility of the estimate, but that sort of interaction goes against social roles — and these days may even be a Federal offense!
In truth, I don’t want to be a railway project manager — but I do want transparency. And I believe that the lack of transparency is a primary source of stress in software development. How many times have you been under the gun trying to explain why certain features take longer than others? Or why performance is so slow? Or why you are still looking for the cause of that bug — armed with nothing more than soundbites from your development team, when what you really needed were facts?
Getting On Track
Customer (internal and external) expectations are dragging development toward greater and greater maturity. The great thing is that tools and processes are finally mature enough to support this industrialization of software engineering. Today’s processes are standardized, consistent and self-correcting, and tools are available that provide fact-based measurement and insight into these processes and the systems they develop.
The largest opportunity for improving quality and productivity during application development is in eliminating its largest sources of waste: defects and the rework they cause. In many organizations, 30% to 50% of the development effort is devoted to rework. These staggering numbers are driven by the fact that defects become 10 times more expensive to fix for each major phase of the software life cycle they slip past. Under these circumstances, quality largely determines productivity. However, by applying lean principles to application development and supporting them with static analysis, companies can realize significant overall cost and risk reduction – 10% or more – most of it based solely on eliminating waste and driving desired behavior among AD teams.
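To make that 10x escalation concrete, here is a quick back-of-the-envelope illustration, assuming an arbitrary $100 cost to fix a defect in the phase where it is introduced:

```python
# Back-of-the-envelope illustration of the "10x per phase" escalation
# cited above; the $100 starting cost and phase names are assumptions.

phases = ["coding", "testing", "staging", "production"]
cost = 100  # cost to fix a defect caught where it was introduced
for phase in phases:
    print(f"Fix in {phase}: ${cost:,}")
    cost *= 10  # each phase the defect slips past multiplies the cost
```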
Keeping application development on track comes down to gaining the best visibility into your software because, in the end, productivity without quality is waste, quality without productivity is costly, and optimal performance is achieved only when quality and productivity are on the same track.
Ever a man ahead of his time, Albert Einstein once said, “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”
Were he alive today, the only thing he likely would change about his statement would be how World War III will be fought. He surely would look at the threats posed by cyber attacks and surmise that the most dangerous weapon of the next world war would be an invisible terror delivered electronically. He would note that the threat could come from any nation state – it would not even have to be a world power – and that it could be delivered with complete stealth, hit the most sensitive systems, cripple infrastructures, topple economies and create chaos — all before even a single soldier was wounded.
The question is, has World War III already begun?
There is no Fate…
Two months ago, the Department of Defense (DoD), the organization U.S. citizens probably think should be the most versed in protecting itself against cyber attacks, revealed that it had been the victim of the largest digital attack on a U.S. Government agency when 24,000 sensitive files were pilfered by an unidentified nation state in March.
This revelation came shortly before a report from the Government Accountability Office (GAO) that openly questioned the DoD’s ability to keep up with the threats of cyberspace. Among the chief issues identified by the GAO were the multiple and often contradictory government publications that discuss how to handle cyber threats. These documents cannot even come to a consensus on terminology and job responsibilities, as pointed out in a recent piece about cyber security on Government Info Security, which stated:
GAO cites a U.S. Joint Forces Command report that found DoD employs 18 different cyber position titles across combatant commands to identify cyberspace forces. ‘This can cause confusion in planning for adequate types and numbers of personnel,’ the GAO says. ‘Because career paths and skill sets are scattered across various career identifiers … there are cases in which the same cyber-related term may mean something different among the services.’
So if the government can’t figure out who does what, what means what or just plain “who is this?” how can we honestly expect it to keep everything in this country that is run or managed by a computer system from being shut down by a cyber attack? Per Government Info Security, that would include “7 million computer devices, linked on over 10,000 networks with satellite gateways and commercial circuits that are composed of innumerable devices and components.”
The Battle Has Just Begun
The Intelligence and National Security Alliance this week released a report in response to what it sees as growing concerns of the U.S. government’s ability to defend itself against a major cyber attack. The report calls for the joint factions of the government to engage in coordinated efforts with private enterprise and the educational system to:
…mitigate risks associated with the threat, enhance our ability to assess the effects of cyber intrusion, and streamline cyber security into a more efficient and cost-effective process based on well-informed decisions.
Hopefully, these joint entities will realize what the Department of Homeland Security realizes – that any cyber security policy needs to include structural analysis of application software. By identifying areas where the applications in use may not live up to optimal software quality standards, the government can work toward plugging the holes and giving cyber infiltration efforts fewer points to breach.
But software is the key. The enemy, to paraphrase “John Connor” from Terminator 3, “is software in cyberspace”… although if the government cannot coordinate its efforts into one cohesive plan, the even bigger enemy of the U.S. Government’s efforts to protect itself from cyber attack may be the government itself.
In the spirit of “Bull Durham”, “The Natural” and “Field of Dreams”, the upcoming movie, “Moneyball”, looks to be the next great American baseball film. I am excited yet conflicted. I am a big fan of those movies, but I happen to be a bigger fan of Michael Lewis’ book upon which the movie is based. And I am concerned that Hollywood will sift past Lewis’ exhaustive research, dodge his insightful observations and string together a few pieces of Billy Beane’s life in the hopes of creating a romantic sports movie (a spormance).
I can’t stop Hollywood so I direct my plea to you: My plea isn’t for baseball fans to hurry and read the book before seeing the movie. Rather my plea is to business people because, in my view, this is a very important business book. Don’t believe me? Well, for starters, it’s got the word ‘Money’ in the title. Not bad. Secondly, the lessons Billy Beane learns aren’t about how not to hurt the feelings of young baseball prospects. He isn’t concerned about destroying their hopes and dreams when they fail to live up to their hyped potential.
Rather, Billy Beane’s mission is to build a great team for as little money as possible (better and cheaper). He inherits a business that is failing. It is cash flow-constrained, has a brand image problem and customer satisfaction is at an all-time low. In the clean-up process, he realizes he has a broken supply chain (his farm system) that is made up of a bunch of good ol’ boys.
Any of this sound familiar?
Running the Base Line
With his butt on the line, Beane seeks a better way. He quickly realizes that he’s not getting decision-quality information. In fact, it’s mostly opinion, and the few scarce points of data (players’ height, weight and batting average) are relatively useless. Desperate, he turns to a math whiz (in real life, Paul DePodesta; in the movie, the fictitious Peter Brand) for help. Brand determines which data should be gathered and which metrics to monitor, and builds a system that automates it all. As a result, Beane is able to accurately predict a player’s future performance based on his historical performance.
Armed with this insight, Beane realizes that he is pursuing players that other teams aren’t. He can, therefore, acquire them more cheaply and create performance-based contracts. In essence, he arrives at a sourcing strategy that gives him a competitive advantage – better product, less cost, better cash flow.
I am writing this from the Gartner Outsourcing & Vendor Management Summit 2011 in Orlando, Florida. As I listen to the discussion about IT outsourcing challenges and strategies, I wonder how many of my fellow attendees will see “Moneyball” and simply wonder if Brad Pitt will win the Best Actor Oscar, and how many would bet on a guy like Peter Brand to help them run their company better?
As many tech industry analysts and those in IT management already know, performing structural analysis of application software is one way of gaining this type of insight and building a better sourcing strategy. It’s a good bet that their challenges are similar to Beane’s, and that they need as much help as possible evaluating, selecting and managing their supply chain – their IT systems. Using measures that matter to the business, like structural quality and performance, businesses must evaluate their application software – whether developed internally or purchased from software vendors – and create “performance-based contracts” for a competitive advantage.
Bob Martin, a principal engineer at the MITRE Corporation, returns in this week’s IT Software Quality Report to discuss the role of software managers in mission-critical applications with CISQ Director Dr. Bill Curtis.
Learn the differences between measuring quality & security for physical products versus software. Bob tells us that when discussing “reliability” in software, one needs to think about security and how someone might influence or degrade a system to do something that wasn’t anticipated or wanted.
Listen to or download this episode now!