Executive Dinner Series: Managing Software Risk within the Insurance Industry

It is becoming increasingly obvious that the risk and complexity locked inside today’s legacy systems are a growing problem for many IT organizations. Will these legacy applications ever be replaced or retired? Are they “Too Big to Fail”? Adding to the concern, many of these applications are so large and so architecturally interconnected that their failure would prove disastrous to the entire business. Few industries make this more evident than insurance. At our recent IT Executive Dinner with stakeholders from the insurance industry, conversation centered on application modernization, legacy application rationalization, and the funding mechanisms insurance IT organizations use to improve their application assets.

Patrolling for Issues in Legacy Apps

It’s not uncommon for organizations to hold onto their application software and IT systems longer than they should. This is particularly true for government agencies – Federal, state and local. When you combine an “if it ain’t broke, don’t fix it” mentality with budget cuts and the comfort levels of staffers, there is little impetus for change.
Clifford Gronauer, CIO of the Missouri State Highway Patrol, discovered just such a system last year. Gronauer was charged with upgrading the patrol’s aging IT system. While scoping the project, he found an antiquated collection of mainframe-based legacy applications that dated back to the 1970s!
The project turned into what Gronauer termed a “perfect storm” of upgrades that forced him to alter his plans from upgrading the system piece-by-piece to doing a complete overhaul broken into larger phases. On the bright side, he stumbled upon a Federal grant that would pay for the project and, in the end, the task earned him recognition as a finalist for the 2011 MIT Sloan CIO Symposium Award for Innovation Leadership.
I can only imagine Gronauer’s reaction when he realized the enormity of the fix that needed to happen. It must have been something akin to the moment in the original “Jaws” when Roy Scheider’s character first lays eyes on the monstrous great white – “You’re gonna need a bigger boat.”

Digging for Clues

Dealing with legacy applications is never fun; in fact, it probably leaves many CIOs scratching their heads and wondering why their predecessors never bothered to upgrade the system. Since there’s no way to retroactively upgrade the application software, they have no choice but to move ahead and make the best of what exists.
This poses a significant problem, though. The average IT manager and most CIOs out there are around my age, and I was in grade school when the Missouri State Highway Patrol’s old system was implemented. This means it’s highly unlikely that even the most senior members of the IT department will have had any experience with the code used to write the legacy apps.
The problem this unfamiliarity poses goes beyond trying to rewrite old code or untangle the system in order to transfer data. Equally complex, if not nearly impossible, is figuring out where the old mistakes are – if nobody knows what’s right, how would they know what’s wrong? That makes finding fixes for old issues problematic at best. The most frequent answers are workarounds, or simply ignoring the issues and hoping they won’t pose a problem down the road. But sidestepping the problem is akin to failing to interview eyewitnesses during a crime investigation: it’s the kind of lapse that produces poor structural quality, which leads to future failures or even crimes being committed (i.e., hacking due to unforeseen security vulnerabilities).

Identifying the Culprit

Since dumb luck is no way to establish the foundation for a new or upgraded IT system, a company building on a base of legacy apps needs to fully analyze what it has and then continually assess the build as it progresses.
Manual analysis of any application software build is cumbersome, time-consuming and highly inefficient – like finding a single needle in 4,000 haystacks. Multiply that difficulty by the fact that the person doing the manual analysis of a legacy app probably doesn’t even know what the needle looks like, and the chances of finding the culprit become vanishingly small.
On the other hand, an automated assessment platform can investigate hundreds of thousands of lines of code far more quickly, and with a far better understanding of what it is looking for. By automating static analysis, companies can ferret out offending legacy code and give those responsible for the upgrade a solid structure upon which to build. Employing the same platform for continual architectural and code-component reviews catches new issues as they arise and ensures that whatever is built atop the legacy application interacts properly with the existing code.
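To make that concrete, here is a minimal sketch of the kind of rule an automated static-analysis pass applies. It uses Python’s built-in ast module, and the two checks (a bare “except” clause and a call to eval) are illustrative choices of mine – a real assessment platform applies hundreds of such rules across many languages, including the mainframe languages most legacy apps are written in.

    import ast
    import sys

    # Illustrative static-analysis sketch - not any vendor's actual platform.
    # Walk a Python source file's syntax tree and flag two structural-quality
    # smells of the sort an automated assessment would catch.

    def analyze(path):
        with open(path, encoding="utf-8") as f:
            tree = ast.parse(f.read(), filename=path)

        findings = []
        for node in ast.walk(tree):
            # A bare "except:" swallows every error, hiding real failures.
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append(f"{path}:{node.lineno}: bare except clause")
            # eval() on arbitrary input is a classic security vulnerability.
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == "eval"):
                findings.append(f"{path}:{node.lineno}: call to eval()")
        return findings

    if __name__ == "__main__":
        for finding in analyze(sys.argv[1]):
            print(finding)

Pointed at a source file, the script prints one line per finding. The point is that a machine applies every rule to every line on every build – a level of coverage no manual review can match.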
This level of attention to structural quality is crucial in the IT department’s constant fight to eliminate outages and security vulnerabilities. So crucial, in fact, that failing to conduct an automated assessment when building on top of a legacy application should itself be considered a crime.