6 Hidden Costs of Maintaining an Open Source Code Analyzer Platform

So, you’re ready to get started on building your own multi-language custom source code analyzer platform using open source components.  Your return estimates are still looking pretty good, even after taking into account the costs in our previous post, “6 Hidden Costs of Building Your Own Multi-Language Code Analyzer Platform”.
Well, we have a quick list of maintenance costs that you may not have considered.  So, before you break ground on that project, see if you thought of all these.

Reduce Software Risk through Improved Quality Measures with CAST, TCS and OMG

Webinar Summary
I had the pleasure of moderating a panel discussion with Bill Martorelli, Principal Analyst at Forrester Research Inc; Dr. Richard Mark Soley, Chairman and CEO of Object Management Group (OMG); Siva Ganesan, VP & Global Head of Assurance Services at Tata Consultancy Services (TCS); and Lev Lesokhin, EVP, Strategy & Market Development at CAST.
We focused on industry trends, and specifically discussed how standardizing quality measures can have a big impact on reducing software risk.  This interactive format allowed attendees to hear four distinct perspectives on the challenges and the progress being made, both within organizations directly and at systems integrators.
Mr. Martorelli started the discussion by providing insight into four powerful dynamics reshaping our ecosystem:

Innovation revolution
As-a-Service as a norm
Changing demographics
Rise of social and mobile

Mr. Martorelli underscored the importance of preparing for these shifts by highlighting the impact poor quality can have on the business:

Poor performing, unstable applications
Diminished potential for brand loyalty, market share, revenues
Costly outages and unfavorable publicity

Dr. Soley from OMG built on Mr. Martorelli’s observations by discussing how standards bodies, such as OMG, SEI and CISQ, are helping industry respond to these challenges by providing specific standards and guidance to gain visibility into business critical applications, control outsourcers, and benchmark in-house and outsource development teams.
Mr. Martorelli emphasized the focus he has seen at client organizations in shifting quality to the left, and how quality is bleeding into many new stakeholders’ responsibilities.
Some of the trends covered during the discussion included:

Moving test and quality to the left of the waterfall
Addressing architectural sprawl with more architectural and engineering know-how
Seeing quality measurement become an important component of service levels
Emerging combined professional services/managed services offerings
Shifting responsibility for quality management to the business user
Favoring more results-driven approaches over conventional staffing-based testing services

Mr. Ganesan from TCS provided insight into how TCS Assurance Services is evolving to meet these new challenges.  Mr. Ganesan explained TCS’s rationale for evolving beyond code checkers and simple code hygiene and the need to employ automated, structural analysis to provide world class service to their clients and ensure more reliable, high quality deliverables.
We’d like to thank each of our panelists for their time and insight.  We received a high level of interest from attendees, with a lot of questions submitted for our speakers.  Please find a selection of these questions below.  If you’d like to listen to the recording of the webinar, click here.
Q&A
It is clear how one might apply this to new development, but how does one approach applying a code quality metric to an existing portfolio? Would not the changes be overwhelming?
In truth, this concern is well founded, and it is a significant non-starter for many organizations.  The sudden accounting of all the potential issues within applications can be daunting, and many solutions have a tendency to generate a lot of ‘noise’ during their analysis.  At CAST, we propose a risk-based approach: one that focuses on identifying the most critical violations rather than all possible violations. We also focus on the new violations being added, rather than the ones that have been sitting in your systems for years. This way, your critical path during an initial technical assessment of an application or portfolio focuses on the most critical risks.  CAST AIP provides a Transaction-wide Risk Index that displays the different transactions of the application sorted by risk category (performance, robustness or security). By focusing on these violations, you will improve the critical transactions of the application.  Additionally, AIP generates a Propagated Risk Index to illustrate the object/rule pairings that will have the biggest impact on improving the overall health of the application or system.  Any analysis without this level of detail and prioritization will certainly create more obstacles than it removes.
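The triage described above can be sketched in a few lines of Python. This is an illustrative sketch only: the field names, categories, and severity scores below are hypothetical, not the actual CAST AIP data model.

```python
# Hypothetical violation records of the kind a static analyzer might emit
violations = [
    {"rule": "SQL injection risk", "category": "security", "severity": 9, "new": True},
    {"rule": "Unbounded loop", "category": "performance", "severity": 7, "new": True},
    {"rule": "Naming convention", "category": "transferability", "severity": 2, "new": False},
    {"rule": "Missing index on join", "category": "performance", "severity": 8, "new": False},
]

# Risk-based approach: only the categories that threaten the business
CRITICAL_CATEGORIES = {"security", "performance", "robustness"}

def triage(violations):
    """Keep only critical-category findings, surface newly added ones
    first, then sort by severity so remediation starts with the worst."""
    critical = [v for v in violations if v["category"] in CRITICAL_CATEGORIES]
    return sorted(critical, key=lambda v: (not v["new"], -v["severity"]))

worklist = triage(violations)
# The low-severity style finding is filtered out; new, severe issues lead.
```

The point of the sketch is the ordering: new, severe, business-critical violations rise to the top, while long-dormant cosmetic findings never enter the worklist at all.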
How do you see the use of Open Source code changing software risk?
Open Source, just like code developed by your own team or a partner, injects risk into systems. And just like any other code, the biggest risk is lack of visibility into that code.  Some studies have found that, in general, open source code is better than industry averages.  Other studies suggest that the quality of the code is a function of the testing approach of the open source community behind it; code that is tested continuously tends to have fewer defects.  So it is difficult to argue that Open Source is inherently more risky.  What can be said is that receiving code from any source, open or contracted, without a proper and objective measure of that deliverable adds risk to your systems.
Bill Martorelli mentioned “Technical/Code Debt” as a quality metric; could you explain a little further, please?
The term “Technical Debt”, first defined by Ward Cunningham in 1992, is having a renaissance. A wide variety of ways to define and calculate Technical Debt are emerging.
While the methods may vary, how you define and calculate Technical Debt makes a big difference to the accuracy and utility of the result. Some authors count the need for upgrades as Technical Debt; however this can lead to some very large estimates. At CAST, our calculation of Technical Debt is data-driven, leading to an objective, conservative, and actionable estimate.
We define Technical Debt in an application as the effort required to fix only those problems that are highly likely to cause severe business disruption and remain in the code when an application is released; it does not include all problems, just the most serious ones.
Based on this definition, we estimate that the Technical Debt of an average-sized application of 300,000 lines of code is $1,083,000 – roughly a million dollars. For further details on our calculation method and results on the current state of software quality, please see the CRASH Report (CAST Report on Application Software Health).
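As a back-of-envelope check, the figure above works out to about $3.61 of remediation effort per line of code (simply the quoted estimate divided by the quoted application size, not an official CAST formula). A hypothetical sketch of the arithmetic:

```python
# Illustrative only: $3.61/LOC is derived from the quoted example
# ($1,083,000 for 300,000 lines), not a published CAST constant.
COST_PER_LOC = 3.61  # dollars of remediation effort per line of code

def technical_debt(lines_of_code, cost_per_loc=COST_PER_LOC):
    """Estimate remediation cost for only the most serious violations
    remaining in the code at release, per the definition above."""
    return lines_of_code * cost_per_loc

debt = technical_debt(300_000)  # about $1,083,000 for an average application
```

The same rate scales linearly, so a 100,000-line application would carry roughly $361,000 of debt under these assumptions.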
Here’s a community dedicated to the awareness and education of the topic: http://www.ontechnicaldebt.com
I have heard a lot of discussion focused on quality today, but I am curious about this group’s perspective on the other component of CAST AIP: function point analysis?
In addition to measuring a system’s quality, the ability to measure the number of function points as well as precise measures of the changes in the number and complexity of all application components makes it possible to accurately measure development team productivity.  Employing CAST AIP as a productivity measurement solution enables:

The calculation of a productivity baseline for either in-house or offshore teams.
The tracking of productivity over time by month or release.
The ability to automatically generate measures of quality and complexity.
The identification of the root cause of process inefficiencies.
The capability to measure the effectiveness of process improvements.

CAST AIP and the CISQ Automated Function Point Specification: The CISQ Automated Function Point Specification produced by the CISQ team led by David Herron of the David Consulting Group has recently passed an important milestone. CISQ has worked with the OMG Architecture Board to get the specification properly represented in OMG’s existing meta-models. This specification was defined as closely as possible to the IFPUG counting guidelines, while providing the specificity required for automation. This fall it was approved for a 3-month public review on the OMG website. All comments received will be reviewed at the December OMG Technical Meeting, and the relevant OMG boards will vote on approving it as an OMG-supported specification (OMG’s equivalent of a standard). From there, it will undergo OMG’s fast-track process with ISO to have it considered for inclusion in the relevant ISO standard.  We believe this standard will expand the use of Function Point measures by dramatically reducing their cost and improving their consistency.
Is the industry average one production incident per week and one outage per month? Are these major incidents and outages for the enterprise?
Here’s a site that provides additional insight into the impact of outages.

Don’t Blame the Outsourcer

In my travels, I run into a lot of organizations that are not happy with the performance of their outsourcer. In many cases, the core relationship is the result of a cascade effect. The organization delivered an application that had poor structural quality to begin with, and left the outsourcer with the difficult task of meeting their SLA requirements with a faulty application.
If you want great results from an outsourcer, here’s job one: make sure the application you’re delivering is structurally sound to begin with. Step two: make sure the tools and technologies you use to ensure structural integrity are also part of the outsourcing agreement and, in the end, KPIs in the SLA.
Organizations are expecting a certain level of satisfaction from their outsourcer, and it’s a lot higher than it was in the past. They want cost reduction, without exposure to risk, and they want fast time to market while still remaining flexible. You cannot get all that simply by accelerating your outsourcing contract, because there are a number of risks associated with poor outsourced delivery.
When an organization first approaches outsourcing, they tend to focus more on legacy applications — the ones that they don’t care so much about — to reduce costs. Most organizations have gone through this phase already. When they get more mature, they tend to think more strategically. They think about benefits: freeing up resources, concentrating on their business and, finally, getting access to innovation.
They’re typically careful with defining the service level agreements. However, software developers are very uncomfortable with a white line in the contract that specifies the behavior of the software they receive in production. Yet, some still believe that they can go live with the expected results based on some sort of contractual relationship through service level agreements. They don’t have to worry about how it’s done. They just have to worry about the service they get.
This conception, which is still prevalent in a number of shops, might have been true when you only outsourced legacy applications that don’t move much. But it’s absolutely wrong in the kind of outsourcing we do today for fast-moving applications that are important to the business. You simply can’t focus only on cost reduction and rely on operational SLAs. This is why it’s important to know what you have inside the box, even when you hand over the keys to a vendor.
Often when an organization outsources, they continue to rely on testing, but they no longer have control over the product itself. They lose architectural oversight. They don’t see the evolving structure of the software, the technical debt, or the structural software risks which they might have tacitly managed when the software was in house. When they don’t measure the structural quality of the product they’re sending out, they’re exposed to a number of dangers and, while it’s all too easy to point the finger at the vendor, it’s a deeper problem than that.
Structural quality drives 30% of major production defects, and one could argue that more than 50% of all defects originate in structural quality if you look at the root cause analysis. Beyond the focus on functionality you get with testing, recent news has certainly shown us that we need to keep a vigilant eye on resilience, performance, and security risk.
By the time such risks materialize, the business impact is often much more than the contractual penalty to the vendor. That penalty may not even be fair, because it could have been a structural flaw in the software to begin with. Unfortunately, most ADM SLAs are based on reactive approaches to managing technical acceptance, rather than a proactive management of the risk. And the business suffers the impact.
If the vendor is getting a badly-constructed application to begin with, their hands are tied. They probably analyzed the help desk tickets or did some code reviews on a spot basis to see what they are taking over, but most of the time there is a fair amount of guesswork. This is an unfair process towards the vendor and the outsourcing relationship.
You can see in one of our recent surveys that almost 70% of the respondents terminated an outsourcing contract because of lack of quality of service. Some of those contracts could have been terminated for the wrong reasons, because it was the application itself that was at fault for poor performance. So the next time your organization thinks about outsourcing, first ask yourself: Am I delivering a structurally sound application to my outsourcer?

Great Expectations and How to Meet Them

There’s a very old mantra around project quality that says, “If you want something done right, do it yourself.”
I disagree.
We recently remodeled the bathroom in our master bedroom. Rather than taking my own sledgehammer to the walls, tub and toilet and then hanging my own sheet rock, my wife and I hired a local contractor who came in, did the demolition and reconstruction, and in the end we wound up with a room with which we’re very happy.
I can tell you without reservation that had I done it myself the project would have turned out disastrous because I confess to a certain measure of incompetence when it comes to carpentry…and plumbing…and electrical systems…and just about every other discipline that goes into rebuilding a bathroom.
I guess you could say we had “great expectations” and knew that to achieve them we needed to find someone else to do the job.
Losing Control
I suspect that this lack of capability does not always extend to companies when they choose to outsource software builds, but there is some measure of it. The decision to outsource usually comes down to one of two reasons – a company doesn’t have the time to do the work itself, or it feels an outside group can do it better.
This decision to outsource is being made by an increasingly large segment of the business community. As was recently noted on The Outsourcing Blog, “the public and private sectors alike are becoming increasingly reliant on third-party suppliers to effectively operate.”
What is a bit off-putting, however, is the claim made in that post that “some 64% of third-parties fail to meet stakeholder expectations and contractual commitments, according to recent research we have undertaken.”
The fact of the matter is, regardless of where a company chooses to outsource, there is a certain relinquishment of control. It is simply neither possible, nor desirable to hold tightly to the reins of all aspects of an outsourced project. When the outsourced project has an offshored element, the potential increase in benefits is met with an equivalent set of risks. Cultural differences and distance alone significantly contribute to increasing both the risks and management costs.
Much of this can be attributed to the fact that organizations have not previously had the means to assess application software quality in real-time when its development has been outsourced. QA plan compliance checks, while useful in some capacities, are normally performed via random manual code reviews and inspections by QA staff. For a typical one million-line-of-code J2EE distributed application, there is significant risk that key issues will go overlooked. Furthermore, standard functional and technical acceptance testing is simply insufficient at detecting severe coding defects that may have impact on the reliability and maintainability of an application. Finally, in the current geopolitical context, programming vulnerabilities, or even hazardous code in a mission-critical application, could easily produce disasters in production – data corruption or losses, system downtime at crucial moments – all of which negatively affect the business operations.
Unfortunately, most IT organizations have chosen to leave the technical compliance issues aside, due either to scarce resources or a lack of the required skills. Instead, they all too frequently assume that tersely worded SLAs will be enough to protect them over time. In reality, while today’s SLAs routinely include financial penalty clauses, fines and legal battles, they are not all that effective in preventing system failures.
Get it Right
In order to be successful, companies need to acquire and deploy software solutions that help manage these global partnerships by providing greater insight into the build process through real-time access to objective data. Employing a platform of automated analysis and measurement to assess the application as it is being built, for instance, affords transparency into the outsourced work, instills discipline in how information is handled and yields metrics to evaluate results.
With that kind of real-time access and information into how a company’s software is being built or customized, it won’t matter if the outsourcer is across the hall, across the street or across the ocean. You will always know just where your software is and if the outsourcer is building it efficiently and up to your high application software quality standards. Not having that kind of insight could lead to software issues that would scare the Dickens out of you!

Done Off-Site, Done Right

In 1807, French playwright Charles-Guillaume Étienne penned the famous line, “On n’est jamais si bien servi que par soi-même.”
For those who do not speak French, you may recognize this now idiomatic phrase as the oft uttered, “If you want something done right, do it yourself.”
Étienne’s words are a proclamation of self-reliance commensurate with the attitude of the French Revolutionary period during which he earned his acclaim; however, they are quite obviously not a hard and fast rule among businesses today. In today’s world, many companies that want “something done right” – including the development of software – look overseas for other companies to do it right for them.
Historically, outsourcing projects have been viewed as a difficult and even tenuous proposition. Variances in how work is conducted and language differences (like the one alluded to above) are seen as things that need to be overcome in order to make outsourcing work…and often the reason for making it work is simply that it costs less to do it overseas.
There are those who question whether it truly does cost less having it done overseas if these problems must be overcome or at least worked around. Others, however,  perform their due diligence, not only before deciding to outsource, but also during and after the project has been done.
Ask the Questions
The decision of whether or not to outsource comes down to one question, “Can another company do it more efficiently than your own?” Curt Finch, CEO of payroll automation company Journyx, offers some advice on the Executive Street blog about how to make the decision of whether or not to outsource software development. He makes it sound very simple:
Answer four questions:
1. How much?
2. How long?
3. How risky?
4. How strategic?
Good questions.
The questions of “how much” and “how long” are rather straightforward and objective – either the potential outsourcer can do it cheaper and quicker or they can’t. Even “how risky” under Finch’s definition – that being how solid the company is and how it is viewed by previous customers – comes down to figures (Finch points to stock price) and real, albeit anecdotal, data.
Unlike the first three questions, however, “How strategic?” is a far more subjective question to answer. By Finch’s admission, this question raises more questions:
“What will your IT shop learn from building this application in-house?  Is this knowledge coherent with your company’s core business strategy? Will the education your team gains from this exercise lead to improved capability for your company’s business, or is it detracting from more appropriate knowledge?”
He admits these are hard questions to answer…or are they?
Get the Answers
Most people believe that outsourcing is akin to off-loading, and if that is the plan of the company shopping the project around, the chances are pretty good the project will turn out badly. Taking a hands-off approach to managing an offshore outsourcing project – by relying on SLAs, for example – and expecting a high-quality output is not only unrealistic, it’s also unfair. Rather, close management — or, even better, increased visibility into the project using application software structural analysis — is critical to achieving the desired result.
To achieve the necessary visibility into the project and in so doing also achieve the strategic value sought in outsourcing the project, a company should consider implementing a platform of automated analysis and measurement to perform strategic structural analysis at each stage of the build.
The next-best thing to hands-on management, structural analysis provides the visibility critical to catching code imperfections in the preproduction phase, before the application is deployed and causes costly and inconvenient outages or compromises security. With this hands-on approach to outsourcing, companies can realistically expect performance equal to what they could produce in-house.
That kind of visibility makes the most difficult of the outsourcing questions much easier to answer because the development of software is neither out of sight, nor out of mind, but rather it is simply software done right!

For Whom the Bell Curve Tolls

As an IT executive, how do you make sure you consistently deliver good results and help the business innovate? How do you do it when you are relying on your vendors to get 80% of your work done? These topics were top of mind at the Forrester Sourcing & Vendor Management Forum I just attended. In the past couple years I’ve had many conversations with IT executives about vendor management, about productivity, about quality and overall about improving large app dev organizations with many moving parts. There are many approaches that come up – process improvement, better governance, introducing measurement, or replacing your vendor. This is not an easy problem and it’s not unusual for the conversation to land somewhere like: “we just need to get better people” or, “it’s all about the people running our projects” or “I just hired the vendor who’s known for paying a little more than the others.”
Wrong answer!
We’ve all heard about that mythical rock star developer who can do the work of 20 mediocre developers. We’ve all seen how incredibly effective a good project manager is, compared to a not-so-good PM. And, as managers, we naturally want to get the best team we can to work for us. Of course, if everyone could pay as much as Wall Street does for their software talent, then everyone would have their projects and their quality under control… Definitely wrong!
In statistics, we have this convenient little concept called the bell curve. If you’re reading this post, I’m sure you’ve heard of it. The bell curve is actually very powerful at describing most groups of things, or people. Yes, I know. Not everything in life looks like a bell curve – sometimes you have to find the right distribution based on empirical studies of past results. This is of course lots of fun, but when studying IT talent I posit that we don’t need to. As much as we all want to feel unique, most human characteristics can be modeled by a normal distribution. Want to line up US males by height? That happens to be a bell curve distribution. The mean is 5’10” and the standard deviation is 3”. That means that 2/3 of US males are between 5’7” and 6’1” in height. The average fastball speed in the Major League is 92 miles per hour, with a standard deviation of about 4 miles per hour. You get the idea.
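The “two thirds within one standard deviation” arithmetic is easy to verify with Python’s standard library, using the height figures above (mean 5’10”, or 70 inches, and a standard deviation of 3 inches):

```python
from statistics import NormalDist

# US male height, per the figures above: mean 70", standard deviation 3"
heights = NormalDist(mu=70, sigma=3)

# Probability of falling between 5'7" (67") and 6'1" (73"),
# i.e. within one standard deviation of the mean
share = heights.cdf(73) - heights.cdf(67)  # roughly 0.68, about two thirds
```

The same calculation works for any of the bell curves mentioned here (fastball speeds, IQ scores): swap in the appropriate mean and standard deviation.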
The developer talent pool out there also follows a distribution. There’s little you can do to go against the laws of nature, or I should say too little that’s typically done to change that (probably a topic for another blog post). Nobody publishes developer capability distributions (yet!) but let’s take one very important input to developer capability: raw smarts. There is lots of data out there about IQ, and guess what!? That’s also a normal distribution. Here’s one such data point from a medical school up in Canada:

 
[Chart: a normal (symmetrical) distribution of IQ scores. Source: http://meds.queensu.ca/courses/assets/modules/types-of-data/symmetrical_and_asymmetrical_data.html]
We can probably make some assumptions that the guys at those bottom rungs of the IQ ladder flunk out of high school, can’t read, or get intimidated by the first “Hello World” program they have to write in Basic. You can also assume that Google, Microsoft and the NSA have the top end. So you can remove the light-blue and the grey “tails” of the curve and you and your vendors are recruiting from something that still looks very much like the bell curve.
Maybe you can pay a little more for some star developers, but I would venture to guess that any organization of more than about 50 developers starts to look like the picture above. So, most of the people touching the code that runs your business processes are somewhere in that big, blue middle of the distribution. If you’re lucky, that’s what your vendors look like too.
One of my more memorable managers used to say “the best team is the one you got.” It took me a while to understand what he meant. But, in the ADM world, all the talk about improvement by having good people, especially in the context of bringing on vendors, is wishful thinking. As an IT manager, if you’re not putting some training, processes and measurement in place to help the organization perform better, you might be inadvertently placing yourself somewhere on the left side of the ‘IT management’ bell curve.

Structural Quality Metrics in Outsourcing SLAs

When I speak to customers and prospects trying to incorporate static code analysis into their software development processes, one of the most common questions that I get is “How do we incorporate the outputs of static analysis into SLAs?” Given the prevalence of outsourcing in Fortune 500 and Global 1000 companies, this question is not surprising. Companies have always struggled to measure the quality of products being delivered, beyond the typical defect densities measured after the fact.
To help organizations answer this and similar questions, I thought I would compile some frequently asked questions around introducing Structural Quality Metrics into SLAs.  Before I get into the details, I want to caution readers against using these metrics simply for monitoring, or as a tool to penalize vendors.  That approach invariably becomes counterproductive; instead, I recommend looking at these metrics as an opportunity to make the vendor-client relationship more transparent and fact-based – a win-win for both sides.
In addition to the FAQs below, for more on this topic you don’t want to miss our next webinar on May 16th.  We are pleased to have Stephanie Moore, Vice President and Principal Analyst with Forrester Research, discussing how to “Ensure Application Quality with Vendor Management Vigilance.”  You can register here.
What kind of structural quality metrics can be included in SLAs?

Quality Indices: Code analysis solutions parse the source code and identify code patterns (rules) that could lead to potential defects. By categorizing these improper code patterns into application health factors such as Security, Performance, Robustness, Changeability and Transferability, you can aggregate and assign a specific value to each category, like the Quality Index in the CAST Application Intelligence Platform (AIP). You should set a baseline for each of these health factors and monitor the overall health of the application over time.
Specific Rules: Quality indices provide a macro picture of the structural quality of the application; however, there are often specific code patterns (rules) that you want to avoid.  For example, if the application is already suffering from performance issues, you want to make sure to avoid any rule violation that would further degrade performance. These specific rules should be incorporated into SLAs as “Critical Rules” with zero tolerance.
Productivity: Amount charged per KLOC (kilo lines of code) or per Function Point. Static analysis solutions should provide the size of the code base that is added in a given release.  Along with KLOC, CAST AIP provides data on the number of Function Points that have been modified, added and deleted in a release.  This is a very good metric, especially in a multi-vendor scenario where you can see how different vendors are charging you and can set targets and monitor productivity for each vendor.

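The productivity metric above can be sketched as a simple per-vendor comparison. The vendor names and figures below are invented for illustration; the sizing data stands in for what a static analysis platform might report for a release.

```python
# Hypothetical per-release data: invoice amount and delivered size
releases = {
    "Vendor A": {"invoice": 240_000, "function_points_delivered": 400},
    "Vendor B": {"invoice": 180_000, "function_points_delivered": 250},
}

def cost_per_function_point(release):
    """Amount charged per Function Point delivered in a release."""
    return release["invoice"] / release["function_points_delivered"]

rates = {vendor: cost_per_function_point(r) for vendor, r in releases.items()}
# Vendor A works out to $600/FP and Vendor B to $720/FP, giving a
# fact-based starting point for setting targets per vendor.
```

Tracked month over month or release over release, the same ratio becomes the productivity baseline discussed in the next question.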
How do you set targets for Structural Quality Metrics?
The ideal way to set targets is to analyze your applications for a minimum of two to three releases and use the average scores as a baseline.
An alternative method is to use industry benchmark data.  CAST maintains data from hundreds of companies across different technologies and industries in a benchmarking repository called Appmarq, and it can be used to set targets based on industry averages or best-in-class performers.
When do you introduce Structural Quality Metrics into an SLA?
Of course, the best time to introduce Structural Quality Metrics into SLAs is at the beginning of the contract, when it is the easiest to set expectations on quality objectives based on the static analysis solution outputs.  However, if you are in the middle of a long-term contract with a vendor, you can try to make changes to the existing SLAs. A situation like this will require collaboration with the vendor to define common goals on why, how and when to use a static code analysis solution and what kind of metrics make the most sense in the context of those goals.
To hear an analyst perspective on achieving maturity in your outsourcing relationships, don’t forget to register for our webinar on May 16th with Forrester Analyst Stephanie Moore.