They say “if something works, don’t fix it.” This old adage may explain why some organizations hold onto legacy systems longer than they should, but it is also why those same organizations struggle with software complexity. In fact, according to the GAO, the federal government spends 80 percent of its $86.4 billion IT budget on legacy systems.
Some organizations choose not to struggle, though. Take, for example, the story of Pennsylvania-based underwriter NSM Insurance Group. It was reported recently on SearchCIO.com that NSM last year purchased a company that still ran a COBOL-based back-office system from the 1990s.
As the article’s author Mary Platt explains, “The legacy system did the work the acquired company needed, but it required a niche firm to maintain it at a significant cost and, moving forward, it couldn’t handle NSM’s business requirements.”
Fortunately, NSM CIO Brendan O’Malley wasn’t nostalgic about the COBOL-based system. He notes that the decision to replace it was clear-cut.
Many CIOs scratch their heads in bemusement, wondering why their predecessors never bothered to upgrade such systems before scrapping them the way O’Malley did. Others, however, throw up their hands, declare there is no way to upgrade legacy software applications, and forge ahead trying to make the best of what already exists.
This poses a significant problem, though. The average IT manager and most CIOs out there were in high school when COBOL systems were implemented. This means it is highly unlikely that even the most senior members of the IT department will have had experience with the code used to write the legacy applications, further adding to the system’s complexity.
And the issues go beyond rewriting old code or untangling the system to transfer data. Equally complex, if not nearly impossible, is figuring out where the old mistakes are – if nobody knows what’s right, how would they know what’s wrong? This makes finding fixes for old issues problematic at best. Workarounds, or simply ignoring the issues and hoping they won’t pose a problem down the road, are the most frequent answers, but sidestepping the problem is akin to failing to interview eyewitnesses during a crime investigation. It’s the kind of inaction that leads to poor structural quality, and poor structural quality leads to future failures or even crimes (e.g., hacking due to unforeseen security vulnerabilities).
A CIO has two choices: conduct automated code analysis of the legacy application to assess its software complexity and system vulnerabilities before integrating new applications on top of it, or transform as NSM did.
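To make the first option concrete, here is a toy sketch of what one narrow slice of automated code analysis looks like: counting decision points to estimate cyclomatic complexity, one common complexity metric. This is a simplified illustration only, not how CAST AIP or any commercial system-level analyzer works, and it parses Python source for brevity, whereas a real legacy assessment would target languages such as COBOL. The function and sample snippet are hypothetical.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of decision points
    (branches, loops, exception handlers, boolean operators)."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

# Hypothetical back-office routine standing in for legacy logic.
legacy_snippet = """
def premium(age, smoker):
    if age > 65:
        rate = 2.0
    elif smoker:
        rate = 1.5
    else:
        rate = 1.0
    return 100 * rate
"""

print(cyclomatic_complexity(legacy_snippet))  # prints 3 (two branches + 1)
```

Metrics like this, gathered automatically across an entire codebase, give a CIO an objective baseline of where complexity and risk concentrate before deciding whether to build on the legacy system or replace it.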
Reducing software complexity is one of the technical goals of digital transformation – like the one O’Malley conducted at NSM or the one at the American Cancer Society discussed in a February OnQuality blog – that enable the business goals to be realized. In today’s software-driven business world, digital transformation has become an enormous component of business transformation and software risk management. But transformation programs pose their own challenges.
Regardless of whether a CIO opts to stick with the organization’s legacy applications or undergo digital transformation, he or she would be wise to employ solutions that provide visibility into obstacles such as excessive complexity and architectural vulnerabilities. All such software risks can be identified through system-level and architectural analysis like that which CAST AIP provides.