It is becoming increasingly obvious that the software risk and complexity facing today’s legacy systems are a growing problem for many IT organizations. Are these legacy applications ever going to be replaced or retired? Are they “too big to fail”? Adding to the concern is the size and architectural interconnectivity of many of these applications, whereby their failure would prove disastrous to the entire business. There are few industries where this is more evident than insurance. At our recent IT Executive Dinner with stakeholders from the insurance industry, conversations centered on application modernization, legacy application rationalization, and the funding mechanisms insurance IT organizations use to improve their application assets.
We just finished the 30-minute webinar in which Dr. Bill Curtis, our Chief Scientist, described some of the findings that are about to be published by CAST Research Labs. The CRASH (CAST Research on Application Software Health) report for 2014 is chock-full of new data on software risk, code quality, and technical debt. We expect the initial CRASH report to be published in the next month, and based on some of the inquiries we’ve received so far, we will probably see a number of smaller follow-up studies come out of the 2014 CRASH data.
This year’s CRASH data, as Bill presented it, is based on 1,316 applications comprising 706 million lines of code – a sizable subset of the overall Appmarq repository. That works out to an average of 536 KLOC per application in the sample. We’re talking big data for BIG apps here. This is by far the biggest repository of enterprise IT code-quality and technical-debt research data. The findings presented included correlations between the health factors – we learned that Performance Efficiency is largely uncorrelated with the other health factors, while Security is highly correlated with software Robustness. We also saw how the health-factor scores were distributed across the sample set, and how structural code quality differs by outsourcing, offshoring, Agile adoption, and CMMI level.
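As a quick sanity check on the per-application average, the figure follows directly from the two headline numbers in the sample (all values below are the report’s published totals, nothing more):

```python
# Sanity-check the average application size in the 2014 CRASH sample.
total_loc = 706_000_000   # 706 million lines of code
num_apps = 1_316          # applications in the sample

avg_kloc = total_loc / num_apps / 1_000  # convert lines to KLOC
print(f"Average application size: {avg_kloc:.0f} KLOC")  # → 536 KLOC
```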
For many CIOs, reporting on software risk is a complex problem. The reports are usually compiled once a quarter and can take days, if not weeks, to complete. Worse, they often fail to deliver actionable insight to answer simple business questions. Which of my critical systems are most vulnerable? Are my IT vendors delivering as promised? How can we improve customer satisfaction? Are my development teams under-performing? How can we improve time-to-market for new projects?
Pay attention, US financial sector, because the UK is one step ahead of you … sort of. They’re at least willing to admit they have a problem with software risk and IT system resiliency, and admitting the problem is the first step on the path to recovery.
A recent report published by Tech Market View confirmed a 2012 warning by a director of the Prudential Regulation Authority that the IT systems of UK banks were “antiquated” and that he could not say with confidence that they were robust. The statements were delivered to a committee in Northern Ireland as it discussed the major 2012 IT failure at RBS/Ulster Bank, which affected the bank’s customers all over the world.
Because the world of software development is so complex and modular, quality assurance and testing for software risk has become costly, time-consuming, and at times inefficient. That’s why many organizations are turning to a risk-based testing model that can identify problem areas in the code before it moves from development to testing. But be careful: hidden risks can remain if you don’t implement the model properly throughout your organization.