Data-driven IT resource planning: Take control of ever-growing software maintenance costs

For many IT-intensive enterprises, the ballooning cost of maintaining software applications may be the biggest elephant in the room. Software maintenance typically accounts for up to 75% of the total cost of ownership of each application. With so much investment and energy dedicated to keeping the lights on, finding a way to better allocate IT resources — even just by a marginal amount — can have a significant impact on the enterprise’s capacity to innovate.
 
CAST’s research into this area has uncovered some provocative findings. As we’ve discussed previously on the On Quality blog, the cost of maintaining a software application is directly proportional to its size and complexity. IT organizations can take several steps using static code quality analysis to reduce size and complexity, and thus diminish their software maintenance costs.

The Tech Babel Fish for CFOs

Any advocate for better software quality knows that one of the biggest challenges is helping the CIO reach the CFO. When your team needs a budget for an important project, those conversations often break down. Thanks to the unavoidable technical complexity of IT, oftentimes the CIO might as well be speaking Esperanto to the CFO.
When it comes to budgeting, IT might be the least-understood department in your organization. And what the CFO doesn’t understand, he doesn’t budget for. Instead, capital that should rightfully go towards IT growth and innovation is allocated to other groups and initiatives. That dulls the organization’s competitive edge, and can have a toxic effect on system quality overall.
This is why I advocate software estimation as a budget-winning process for IT leaders. It clearly correlates software quality and technical debt in ways that a CFO or CEO can understand. “Technical debt” is a useful term that helps people outside of IT understand that application risk can be measured, and has a cost that gets paid for one way or the other.
The difficult part comes where the rubber meets the road. Your CIO has intimate knowledge of the inner workings of your IT department; you just need to equip him with the proper metrics to interface with the CFO.
Rather than getting technical, the CIO must decode what the IT teams do and translate it into the language of planning and budgeting — with a focus on being responsive, adding new capabilities, and reducing maintenance costs and risk per head. This is one place where our technology can help — with metrics like:

Software maintenance effort over time. This metric tracks the estimated software maintenance effort of your most critical applications, broken down by fiscal quarter. It gives you an at-a-glance view of which applications require the most upkeep, and which are actually becoming more efficient.

Change in risk and size over the last four quarters. This report shows how many applications increased or reduced their risk to the organization, and which applications grew or shrank in size. It is a great way to tell whether your bloated applications are becoming a structural quality risk.

Estimated vs. planned maintenance effort. This metric compares the planned maintenance effort for each application against the estimated effort. Application size, the number and type of technologies, complexity, and quality are all drivers of the estimated maintenance effort (a rough sketch after these metrics illustrates the arithmetic).

Top 10 applications by high-risk technical debt. This might be the most telling metric to bring to your CFO. This report shows the proportion of technical debt in your application portfolio that is driven by dangerous coding patterns and should be addressed first to minimize business risk exposure.
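To make the arithmetic behind these reports concrete, here is a rough sketch in Python of how an estimated maintenance effort and a ranking by high-risk technical debt might be derived from a handful of per-application measures. The field names, weights, and effort formula are purely illustrative assumptions, not CAST’s actual estimation model.

    # Illustrative only: a toy roll-up of per-application measures.
    # Field names, weights, and the effort formula are hypothetical, not CAST's model.
    applications = [
        {"name": "Billing",   "kloc": 850, "complexity": 0.8, "quality": 2.9, "high_risk_debt": 1_200_000},
        {"name": "CRM",       "kloc": 400, "complexity": 0.5, "quality": 3.4, "high_risk_debt": 450_000},
        {"name": "Logistics", "kloc": 600, "complexity": 0.7, "quality": 3.1, "high_risk_debt": 900_000},
    ]

    def estimated_effort_days(app, base_days_per_kloc=0.9):
        """Toy estimate: effort grows with size and complexity, shrinks with quality."""
        return app["kloc"] * base_days_per_kloc * (1 + app["complexity"]) / app["quality"]

    # Estimated vs. planned effort per application (planned figures would come from the budget).
    for app in applications:
        print(f"{app['name']:<10} estimated ~{estimated_effort_days(app):,.0f} person-days per quarter")

    # Applications ranked by high-risk technical debt -- the report to bring to the CFO.
    top_by_debt = sorted(applications, key=lambda a: a["high_risk_debt"], reverse=True)
    for rank, app in enumerate(top_by_debt, start=1):
        print(f"{rank}. {app['name']}: ${app['high_risk_debt']:,} in high-risk technical debt")

In practice, the drivers and their weights would be calibrated against measurements of your own portfolio rather than set by hand.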

With all the dimensional views an organization can get through our Application Intelligence Platform and Highlight reports, it can distill dense technical conversations down to the point where finance and IT intersect. And armed with those key KPIs, your CIO will have rock-solid metrics — in the CFO’s language — that can foster a dialogue both sides understand.

Wrapping Up Our ADM Discussion

There were so many great questions from attendees after the “Aligning Vendor SLAs with Long-Term Value” webinar that I moderated last week that we’ve compiled them here for you. Whether you participated in the webinar or not, I’m sure you’ll find the questions — and answers — fascinating. Plus, don’t forget to check out the results from the real-time poll we conducted during the webinar!
If you participated in the webinar but just didn’t get around to asking a question, feel free to email me at steven.hall@isg-one.com.
Q. Are you seeing the use of Agile in maintenance as well as development?
A. We are seeing Agile grow in global development projects but not in the maintenance area. Minor maintenance enhancements still tend to be bundled as small releases or included as part of a development release.
Q. Can you give an example of how you use the Schedule and Cost Index to calculate Earned Value?
A. CPI (Cost Performance Index) and SPI (Schedule Performance Index) are both outputs of the Earned Value process. Both calculations provide excellent insight into the overall state of the project. Earned Value Project Management (Fleming and Koppelman) and Managing Global Development Risk (Hussey and Hall) both address how to use Earned Value to manage large-scale, global projects.
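For readers who want a concrete example: under the standard Earned Value definitions, CPI = EV / AC and SPI = EV / PV, where EV is the earned value (budgeted cost of work performed), AC is the actual cost, and PV is the planned value. A quick illustration in Python, with made-up figures:

    # Standard Earned Value Management formulas; the figures below are invented for illustration.
    planned_value = 500_000   # PV: budgeted cost of work scheduled to date
    earned_value  = 450_000   # EV: budgeted cost of work actually performed
    actual_cost   = 480_000   # AC: actual cost of the work performed

    cpi = earned_value / actual_cost    # CPI < 1.0 means the project is over budget
    spi = earned_value / planned_value  # SPI < 1.0 means the project is behind schedule

    print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
    # CPI = 0.94, SPI = 0.90 -> slightly over budget and behind schedule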
Q. What is your view of standards such as ISO 20000 for service management, ISO 9001, etc.?
A. We are seeing much broader adoption of ITIL V3 Service Management, and Service Integration and Service Management are both becoming significant market trends in the outsourcing space.
Q. Do you see any trends in the industry toward near-shoring?
A. Near-shoring and domestic sourcing have grown over the past several years. ISG has published multiple articles on this subject, as well as the whitepaper “Gone Country: Rural Outsourcing Gains Ground.”
Q. Does it make sense to have another supplier test defects? Or can we put the right checks in place to have the same supplier test their own work?
A. We’ve seen a growing trend toward Testing Centers of Excellence (CoEs) that are improving the overall testing process through enhanced automation and a focus on testing disciplines. Scale is important, though. If the testing organization is fewer than 150 FTEs, it is often best to combine it with the service provider doing the development.
Q. Don’t most SIs need to build a “Risk Premium” into their bids/proposals…to cover themselves against unknown structural/technical risk? Is there a way to reduce costs by reducing that premium if a customer had a good baseline on the technical quality to share?
A. We believe there can be significant savings in Run the Business (RtB) or maintenance costs if structural quality is defined and measured in the earlier phases. The level of software quality required is usually defined by the potential consequences of a failure due to poor quality; it therefore depends on specific business objectives and on tradeoffs between long-term maintainability and immediate robustness, efficiency, and security. Hence, quality objectives and thresholds should, as much as possible, be determined by business lines and project management teams depending on the specific context and the business, technical, and functional objectives. Ideally, for every system, “expected targets” are defined as quality scores to achieve or surpass, and “minimum thresholds” as a limit below which ADM must not go. All the quality indicators must be baselined by analyzing the applications before the Expected and Threshold Targets are set. Then, based on the baseline quality indicators and the requirements of the business, Expected and Threshold Targets are agreed upon by end-customers and their service provider. There is strong market evidence that process discipline reduces re-work; the same rationale should certainly apply to structural quality as well.
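To picture the “expected target / minimum threshold” mechanics described above, here is a minimal sketch in Python. The indicator names, scale, and scores are hypothetical and only meant to show how baselined quality indicators might be checked against agreed targets:

    # Hypothetical sketch: checking baselined quality scores against agreed targets.
    # Indicator names and values are illustrative; real scores would come from analysis tooling.
    agreed_slas = {
        # indicator: (minimum_threshold, expected_target), on an assumed 1.0-4.0 scale
        "robustness":    (2.8, 3.3),
        "security":      (3.0, 3.5),
        "changeability": (2.5, 3.0),
    }

    measured = {"robustness": 3.1, "security": 2.9, "changeability": 3.2}

    for indicator, (threshold, target) in agreed_slas.items():
        score = measured[indicator]
        if score < threshold:
            status = "BREACH: below minimum threshold"
        elif score < target:
            status = "watch: above threshold, below expected target"
        else:
            status = "meets expected target"
        print(f"{indicator:<14} {score:.1f}  ({status})")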
Q. How do you see Agile methods affecting outsourcing contracts and relationships? Better, worse, or no difference?
A. Overall better. I have personally led several very large development projects using Agile methodologies in a global delivery model, and these were highly successful. My thinking has changed in this area over the years, though, as I’ve seen the investments made by so many service providers to better align their teams with Agile models. The fact that you can conduct global Scrums and use shared workspaces and tools to manage requirements and build cycles provides significant advantages in a truly global model. The trick is to have development and design teams at all global locations, and to ensure the teams at each location are empowered to make decisions. We had client and supplier personnel at each other’s facilities, and teams were integrated throughout the entire lifecycle. This is very different from the traditional onsite/offshore model, but it is extremely effective.
ISG is adding agile methodology language to our contractual exhibits, including Statements of Work, Service Levels, and pricing documents.
Q. How would you weight the importance of each primitive metric?
A. It is difficult to weight them without understanding the context. Each primitive often drives different management decisions, whether about overall quality, application footprint, or productivity improvements. The key is to understand what behaviors you want to incent and use the SLAs associated with the primitive metrics to align behaviors with measurable results.
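As a purely illustrative example of tying primitive metrics to the behaviors you want to incent, a composite SLA score might weight each primitive by its business priority. The metric names, targets, and weights below are assumptions, not a recommendation:

    # Illustrative weighted SLA scorecard; metric names, targets, and weights are hypothetical.
    primitives = [
        # (name, weight, actual, target) -- a higher actual/target ratio is better here
        ("defect_removal_rate", 0.40, 0.92, 0.95),
        ("on_time_delivery",    0.35, 0.88, 0.90),
        ("productivity_gain",   0.25, 0.05, 0.04),
    ]

    # Cap each ratio at 1.0 so over-performance on one metric cannot mask a miss on another.
    composite = sum(weight * min(actual / target, 1.0) for _, weight, actual, target in primitives)
    print(f"Composite SLA attainment: {composite:.0%}")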
Q. If you recommend test centers of excellence and separate vendors for test, how do you deal with the tension between this and an Agile methodology, which works best with team ethic and co-location of developers and testers in the same Scrum?
A. Agile puts a different spin on Testing CoEs. I haven’t seen an Agile project use a separate testing organization. However, most organizations are still using blended delivery models, with a small sample of projects using Agile methodologies, so there still seems to be value in developing Testing CoEs.
Q. Is testing still required for an application if we adopt CAST?
A. Yes, functional testing is still required even if CAST AIP is fully implemented. Functional testing tools ensure that an application meets its defined requirements. The International Organization for Standardization (ISO) calls this “testing for external quality.” External quality measures the behavior of the system and can only be assessed during the testing and production phases of the lifecycle, by executing the software in a system environment. Neither manual nor automated (tool-based) testing generally looks at the internal quality of an application, that is, whether the application will withstand change over time or whether the architecture is sound; rather, both focus on ensuring that it does what it is expected to do today. Even though quality assurance tool vendors guarantee that they support testing across the application development lifecycle, it is still necessary to wait until integration testing occurs to see what effects the overall system may incur.
Q. Is there any industry-level efficiency/savings when ADM is managed end-to-end by a single vendor? Compared to multi-vendor, is it better?
A. This is the classic “it depends” question. In general, we do not see significant differences in efficiency that lead to cost savings from service providers in a single- vs. multi-supplier model. In fact, we often see better pricing, relationship management, and productivity in a multi-supplier model, because all suppliers know they have to be competitive every day on accounts that are truly multi-supplier. However, we do see higher retained costs and governance complexities as organizations manage more complex deals with multiple suppliers.
Q. Like FPs or UCPs used in development projects, what measures are typically used for measuring productivity in Apps Support or Production Support areas?
A. Function points can be used in some mature environments to estimate maintenance effort. We have also developed some interesting complexity matrices that take into account function points, average # of instances, defect backlog, number of interfaces, number of users, response/resolution times, and availability times. These models look very promising. There is an interesting article on function points and their impact on development and maintenance called “A Cease Fire in the Function Point Holy War.”
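As a rough sketch of what such a complexity matrix might look like, the snippet below combines the factors mentioned above into a single relative index. The weights and normalization constants are invented for illustration; a real model would be calibrated against historical maintenance data:

    # Illustrative maintenance-complexity index combining the factors mentioned above.
    # Weights and nominal values are invented; a real model would be calibrated.
    app = {
        "function_points": 2500,
        "avg_instances":   4,
        "defect_backlog":  120,
        "interfaces":      18,
        "users":           3000,
    }

    weights = {
        "function_points": 0.40,
        "avg_instances":   0.10,
        "defect_backlog":  0.25,
        "interfaces":      0.15,
        "users":           0.10,
    }

    # Normalize each factor against a nominal "typical" value so the factors are comparable.
    nominal = {"function_points": 2000, "avg_instances": 2, "defect_backlog": 100,
               "interfaces": 10, "users": 2000}

    complexity_index = sum(weights[k] * (app[k] / nominal[k]) for k in weights)
    print(f"Relative maintenance complexity index: {complexity_index:.2f}  (1.0 = nominal application)")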
Q. What is a reasonable number of SLAs to manage in maintenance?
A. It’s important to balance the number of maintenance SLAs with the key behaviors you want to incent. In general, you should incent Responsiveness to Incidents, Timely Resolution of Incidents/Problems, Preventive Maintenance (reducing recurring problems), and backlog or queue management of lower-level issues.
Q. Why and how do you measure EVA in an outsourced environment, as the staff don’t work for the client organization (they work for the provider)?
A. The core EVA metrics, CPI (Cost Performance Index) and SPI (Schedule Performance Index) should be captured and measured by the Service Provider managing the project. If the client is providing the PM support, then Service Providers should be obligated to provide EVA metrics on a weekly basis. Earned Value Project Management (Fleming and Koppelman) and Managing Global Development Risk (Hussey and Hall) both address how to use Earned Value to manage large scale, global projects.
In case you missed it, the full webinar recording can be found here.
 

The Impact of Outsourcing on ADM

Last week, Steve Hall, Partner & Managing Director at ISG (formerly TPI), presented a webinar on the topic of aligning vendor SLAs with long-term value. The discussion focused on the need to not only consider cost savings within ADM (Application Development & Maintenance), but also the importance of risk mitigation and value enhancement of vendor-client relationships.
As businesses increasingly look to shift from a “Run the Business” (RtB) model to a “Change the Business” (CtB) perspective, broader adoption of software tools that provide automated function point counts and technical insights, as well as increase application structural quality, is critical to successfully moving to the CtB model. To read more on the topic, check out Steve’s post.
We enhanced the webinar discussion with real-time polling of the attendees. Here are the polls conducted during the webinar and the corresponding responses:
With how many ADM vendors does your organization or your client work?
These findings support one of the trends Steve discussed: multi-sourcing. Organizations are increasingly segmenting their outsourcing strategy rather than using a monolithic approach. The approaches to multi-sourcing vary, from splits by function or business unit to specialization by technology. Nonetheless, the outsourcing landscape is becoming increasingly complex, and both clients and vendors must work together to coexist and delineate roles and responsibilities.
Do your outsourcing/customer SLAs include metrics on the technical quality of code being delivered?
More than half of the respondents are looking at measuring structural quality in their SLAs, but only 26% already include quality metrics as part of their formal agreements with their vendors. As the complexity of the outsourcing mix increases and vendors are asked to coexist with each other or as embedded roles within clients, quality-based SLAs can be powerful monitoring solutions that add transparency, accountability and objective assessment of work performed. Transparency is essential for organizations to create mutually beneficial environments for their internal clients and service providers.
How often do you or your clients perform code reviews on vendor-delivered code?
Although not institutionalized, the industry is holding vendor deliverables to a quality standard through manual or automated code review processes. As quality-based SLAs become more prevalent, organizations will need to find ways to enforce them.  Traditional or manual code review processes will quickly become a bottleneck and introduce subjectivity to a business process that will need to be objective and repeatable to keep pace with today’s development environment.

Top 5 ADM Sourcing Success Factors

While cost savings remains a top priority for global Application Development and Maintenance (ADM) organizations, businesses are also increasingly looking to shift from a “Run the Business” model to a “Change the Business” (CtB) perspective, whereby ADM resources are focused on enabling transformational enterprise-wide initiatives, rather than merely supporting the status quo and reacting to small enhancements.
The demands associated with a CtB approach, however, can increase the risk of higher defect rates and maintenance costs, as well as poor documentation and weak standards around performance and process measures.
Here are five key strategies for ensuring that a CtB ADM model is implemented with optimal efficiency.

Separate and manage service categories. Delineation of the major categories of ADM is essential to efficient management. As the old adage goes, you can’t manage what you can’t measure — it is imperative for organizations to understand their application support costs.  Specifically, you need to define three main categories – Maintenance, Minor Enhancement, and Project-Based Work – and manage costs of each of those categories separately. The reason is that cost drivers and performance indicators vary by category, so different actions and priorities may be required for each.
Align the retained organization with the strategy. Many ADM organizations fall into the trap of focusing on what they can do, rather than what the business needs. For a CtB strategy, that’s a recipe for disaster. Key roles and attributes to help ensure that ADM activity stays on track include liaisons whose specific role is to coordinate between business and IT leaders; program/project managers to oversee large programs; client-driven architecture; ongoing business analysis to understand and communicate business needs; service delivery managers; and business subject matter experts to work with developers and testers.
Define end-to-end Service Level Agreements. Multi-supplier environments are the norm in outsourcing today. They provide significant advantages to clients, but they also introduce complexities when trying to govern a large outsourcing arrangement or manage a large software development program. Service Level Agreements are an effective way to incent the right behaviors in a multi-supplier environment, but they must be built to address the entire project lifecycle and be clearly aligned with the responsibilities of the suppliers.
Embrace process excellence. Service Levels and quality goals are often aligned with process maturity, and more mature organizations can be contractually obligated to deliver higher-quality code. Since process adherence drives repeatability and productivity improvements, service providers should be incented to operate at CMMI Level 3 or higher and to commit to productivity improvements. Conduct ongoing operational assessments to ensure process adherence.
Use Key Performance Indicators (KPIs) and dashboards. Gauge performance on an ongoing basis with tools such as an Earned Value Analysis (EVA) that includes a schedule and cost performance index. To stay on top of defects, employ other quality metrics such as the following (a short sketch after this list shows how a few of these might be computed):

Defect injection and probability statistics for software quality
Structural Quality tools and techniques to validate the longer term supportability of the code
Regression testing, defect detection, and effective test coverage ratios to validate the functional quality of the deliverables
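A brief sketch of how a couple of these ratios might be computed from raw counts; all figures and phase names below are invented for illustration:

    # Illustrative KPI calculations from raw counts; all figures are invented.
    defects_found_by_phase = {"design": 12, "build": 48, "system_test": 30, "production": 10}
    defects_injected_in_build = 60          # assumed count of defects whose root cause is the build phase
    requirements_total = 220
    requirements_with_passing_tests = 198

    total_defects = sum(defects_found_by_phase.values())
    defect_detection_efficiency = 1 - defects_found_by_phase["production"] / total_defects
    defect_injection_rate = defects_injected_in_build / requirements_total  # defects per requirement
    test_coverage_ratio = requirements_with_passing_tests / requirements_total

    print(f"Defect detection efficiency: {defect_detection_efficiency:.0%}")
    print(f"Defect injection rate:       {defect_injection_rate:.2f} defects per requirement")
    print(f"Effective test coverage:     {test_coverage_ratio:.0%}")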

Editor’s Note: On February 16th, at 11 a.m. Eastern, Steve Hall will be the featured guest speaker on a webinar hosted by CAST Software on the topic of aligning vendor SLAs with long-term value.  The discussion will focus on risk mitigation and value enhancement from vendor relationships. Key points will include:

Building healthier, more transparent relationships with vendors based on practical, meaningful metrics
Avoiding vendor lock-in by making sure your applications can be easily transferred and quickly understood from one team or vendor to another
Improving production support activities by focusing on application quality
Aligning metrics between vendor management and project managers

To register now for this free, online webinar, please click here.