Join Fellow CIOs & Executives for a Session on Software Risk Management

Consider this an invitation… to find out how you can significantly reduce the risk that exists within your applications.
With data centers growing from dozens of individual servers to hundreds or thousands of virtual servers distributed around the globe, and with software that has to operate at that scale, managing risk has never been more important. Software development today relies on shorter cycles, continuous delivery, and agile techniques, all of which can introduce additional risk.

Launch Party Wrap-Up: Software Risk Management Goes to Broadway

With the cost of U.S. data breaches up nine percent from last year, and Target CEO Gregg Steinhafel resigning amid the fallout of the company’s massive credit card breach, every IT organization has software risk management top of mind in 2014.
Last month, at The Art Directors Club in New York, we held an application risk launch party for our new Application Intelligence Platform, which features a sophisticated Application Analytics Dashboard. The updated interface gives IT executives actionable insight into the security, performance, robustness, and changeability of the most critical business applications in their portfolio.

CIOs Must Take Stock!

You’ve taken the obvious steps to cut costs in your application portfolio, so where do you go next?  With a large, dispersed IT infrastructure and systems that operate in silos, often with duplicative functionality, it’s not necessary to take on your portfolio in a single bite.
To get started, you simply need answers to a few basic questions that will define the direction the organization takes to optimize the value of its IT portfolio, such as:

What do we own?
Is it healthy?
Is it redundant?
Is it being used?
Is it necessary?

In our new ebook, “CIOs Must Take Stock! Simple Steps to Understand your Application Portfolio,” we give practical advice and best practices to achieve success.

You’re on Day One as a New CIO – Now What?!

Starting a new job is incredibly stressful: you’re thrust into a brand-new environment with new people, new policies, and new systems you’re unfamiliar with (not to mention checking out the snack situation and learning how to work the security card system — after you get locked out at least once). Your first few weeks as the “new guy” or “new lady” buy you some slack, but you’re still expected to learn as much as possible, as quickly as possible, so you can start contributing. Well, take that situation and multiply it by about a thousand, and that’s what it’s like for a new CIO.

The Evolution and Career Path of the CIO

There’s been a lot of debate in the news and on social networks about what’s in store for future CIOs. Oddly, pundits are in on the act, attempting to define exactly what we mean by CIO. Regardless of the title, the fact is that CIOs live on the knife’s edge of innovation, and today, that blade has never been sharper.
I’ll be talking about this at length in a webinar, The Evolution and Career Path of The CIO, which I encourage you to attend. Today, I wanted to offer some insight into what I’ll be covering and the impact of the changing role of the CIO.
The CIOs of Old
In the past, CIOs were measured narrowly. That is, if the software they developed was on time and on budget, they got a pat on the back. Now, because of the pervasiveness of technology, CIOs can be measured in terms of business impact, customer insight, and enabling business growth through the optimization of people, process, technology, and data & analytics. Anyone looking to carve a path to the CIO’s chair can’t afford to be reactive to the business strategy. They need to be proactive, strategic, and a spot-on culture fit.
The path to the CIO’s chair typically started in application development and progressed gradually toward the CIO role. Now, though, we’re seeing CIOs come from finance and even HR. That’s because technical skills are becoming less of a priority; top-level management wants the CIO to keep teams productive and build relationships with customers.
CIOs of the Future
While the next generation of CIOs will still likely come from the world of application development, they will need to better understand how to measure themselves in terms of their business impact.
Because of this, the prioritization of skills and competencies for CIOs has shifted from focusing on deliverables and cost to focusing on customer analytics and business growth. IT organizations are now managing not only risk and cost but also business outcomes. And they’re doing all of this with less budget and staff, making the tools and assets that manage risk and cost critical to the business.
Speed is also becoming a main driver for every IT organization, and CIOs are being asked to increase their speed to market, speed to solution, and speed to understanding their customers through an improved digital strategy.
The CIO will always be measured against two factors: time and budget. That will never change. But who would have guessed that simply adding “driving business outcomes” into the mix would radically alter what CIOs need to be prepared for?
It’s a catch-22 in some ways. You could hire a non-technical CIO who knows the business but might not deliver the meat and potatoes in terms of great products and services. Or you could grab a techy CIO whose products are innovative but who lacks the business acumen to drive results. Either way, your CIO is going to have big shoes to fill.
How do you see the role of the CIO evolving? Do you have your career path to the CIO mapped out? Sign up for my webinar, The Evolution and Career Path of the CIO, for the answers to these questions and more to help you start your roadmap to the CIO role.

Gartner Webinar: Get Smart about Technical Debt

Over the past 10 years or so, it has been interesting to watch the metaphor of Technical Debt grow and evolve. Like most topics in software development, it has not been embraced by the industry without some debate or controversy. Regardless of your personal thoughts on the topic, you must admit that the concept of Technical Debt resonates strongly outside of development teams and has fueled the imagination of others to extend the concept into additional areas such as design debt and related metaphors. There is now a spate of resources dedicated to the topic, including the industry aggregation site OnTechnicalDebt.com.
We recently had David Norton, Research Director with Gartner Research, as the guest speaker on a webinar, “Get Smart about Technical Debt.” During the webinar, Mr. Norton was passionate about the topic; he believes Technical Debt will transcend its casual use by architects (and marketers) and find a more permanent place in the vernacular of CIOs and CFOs. Spend just a little time with Mr. Norton and it is clear that he is on a quest to establish the concept as an important indicator of risk, and that the practice of measuring and monitoring Technical Debt will soon become a requirement as the industry continues to mature.
In my personal view, Technical Debt, although not perfect, is one of the few development metrics that rises above the techno-speak of the dev team. Technical Debt resonates because it attempts to quantify the uncertainty within a product or development process, an uncertainty that underlies all the years of process improvement initiatives, training classes, project management tools, and overhead typically forced upon dev teams.
I believe that rather than fighting this metaphor, dev teams should embrace Technical Debt and work with the organization to create a common definition and method for calculating it. Once a definition and method are determined, ongoing measurement takes very little overhead. To me, the value to the development organization is that we are now armed with a “hard cost” for poor or myopic decisions.
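To make that concrete, here is a minimal sketch of one way a team might turn an agreed-upon definition into a repeatable calculation. This is an illustration only, not CAST’s or Gartner’s formula; the severity classes, percentages to fix, fix-time estimates, and hourly rate are all assumptions made up for the example.

```python
# Illustrative sketch only: one possible way to turn a shared definition of
# Technical Debt into a repeatable calculation. The severity classes, fix-time
# estimates, and hourly rate are hypothetical assumptions, not an industry standard.

from dataclasses import dataclass


@dataclass
class SeverityClass:
    name: str
    violations: int       # violations of this severity reported by analysis
    pct_to_fix: float     # fraction the organization agrees must be remediated
    hours_per_fix: float  # estimated effort to remediate one violation


def technical_debt(classes: list, hourly_rate: float) -> float:
    """Estimated cost to remediate the agreed-upon share of violations."""
    return sum(
        c.violations * c.pct_to_fix * c.hours_per_fix * hourly_rate
        for c in classes
    )


if __name__ == "__main__":
    portfolio = [
        SeverityClass("high", violations=120, pct_to_fix=0.50, hours_per_fix=2.5),
        SeverityClass("medium", violations=900, pct_to_fix=0.25, hours_per_fix=1.0),
        SeverityClass("low", violations=4000, pct_to_fix=0.10, hours_per_fix=0.5),
    ]
    print(f"Estimated Technical Debt: ${technical_debt(portfolio, hourly_rate=75):,.0f}")
```

Run against each new analysis snapshot, a figure like this supplies the “hard cost” described above and makes the trend easy to track from release to release.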
I really enjoyed Mr. Norton’s passion and the great discussion and questions from those who joined us on the webinar. If you missed it, you can watch the recording here. I’d be interested in your take on his view.
If you want to learn more about Technical Debt, there are some great resources listed below:

Java Application Architecture: Modularity Patterns with Examples Using OSGi (Agile Software Development Series) by Kirk Knoernschild
The Economics of Software Quality by Capers Jones and Olivier Bonsignour
Paying Down the Interest on Your Applications: A Guide to Measuring and Managing Technical Debt white paper from CAST
The CRASH Report – 2011/12 (CAST Report on Application Software Health)
How to Monetize Application Technical Debt research paper with Gartner and CAST

Patrolling for Issues in Legacy Apps

It’s not uncommon for organizations to hold onto their application software and IT systems longer than they should. This is particularly true for government agencies – Federal, state and local. When you combine an “if it ain’t broke, don’t fix it” mentality with budget cuts and comfort levels of staffers, there is little impetus for change.
Clifford Gronauer, CIO of the Missouri State Highway Patrol, discovered just such a system last year. Gronauer was charged with upgrading the patrol’s aging IT system. While vetting the scope of the project, he found an antiquated system of mainframe-based legacy applications dating back to the 1970s!
The project turned into what Gronauer termed a “perfect storm” of upgrades that forced him to alter his plans from upgrading the system piece-by-piece to doing a complete overhaul broken into larger phases. On the bright side, he stumbled upon a Federal grant that would pay for the project and, in the end, the task earned him recognition as a finalist for the 2011 MIT Sloan CIO Symposium Award for Innovation Leadership.
I can only imagine Gronauer’s reaction when he realized the enormity of the fix that was going to need to happen. It must have been something akin to the one Roy Scheider’s character had in the original “Jaws” film when he first laid eyes upon the monstrous great white – “We’re gonna need a bigger boat.”
Digging for Clues
Dealing with legacy applications is never fun; in fact, it probably leaves many CIOs scratching their heads and wondering why their predecessors never bothered to upgrade the system. Since there’s no way to retroactively upgrade the application software, they have no choice but to move ahead and make the best of what exists.
This poses a significant problem, though. The average IT manager and most CIOs out there are around my age, and I was in grade school when the Missouri State Highway Patrol’s old system was implemented. This means it’s highly unlikely that even the most senior members of the IT department will have had experience with the code used to write the legacy apps.
The problem this unfamiliarity creates goes beyond just trying to rewrite old code or untangle the system in order to transfer data. Equally complex, if not nearly impossible, is figuring out where the old mistakes were: if nobody knows what’s right, how would they know what’s wrong? This makes finding fixes for old issues problematic at best. Workarounds, or simply ignoring the issues and hoping they won’t pose a problem down the road, are the most frequent answers, but sidestepping the problem is akin to failing to interview eyewitnesses during a crime investigation. It’s the kind of shortcut that leads to poor structural quality and, in turn, to future failures or even crimes being committed (i.e., hacking due to unforeseen security vulnerabilities).
Identifying the Culprit
Since dumb luck is no way to establish a foundation for a new or upgraded IT system, a company building up from a system of legacy apps needs to fully analyze what it has and then continually assess the build as it proceeds.
Manual analysis of any application software build is cumbersome, time-consuming, and highly inefficient – like finding a single needle in 4,000 haystacks. Multiply that difficulty by the fact that the person doing the manual analysis of the legacy app probably doesn’t even know what the needle looks like, and the chances of finding the culprit become vanishingly small.
An automated assessment platform, on the other hand, can investigate hundreds of thousands of lines of code more quickly and with a far better understanding of what it is looking for. By automating the process of static analysis, companies can ferret out offending legacy code and give those responsible for the upgrade a solid structure upon which to build. Employing that same platform of automated analysis and measurement for continual architectural and code component reviews, to catch any new issues that arise, ensures that what is being built atop the legacy application interacts properly with the existing code.
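As a toy illustration of what automating static analysis means at its simplest (and emphatically not a stand-in for a commercial analysis platform), the sketch below walks a source tree and flags a few risky patterns; the file extensions and rules are hypothetical examples chosen for the sketch.

```python
# Toy illustration of rule-based static analysis. The extensions and "risky
# pattern" rules below are hypothetical examples, not a real rule set.

import re
from pathlib import Path

# Each rule pairs a human-readable description with a regex to flag.
RULES = [
    ("hard-coded credential", re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE)),
    ("empty error handler", re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}")),
    ("TODO left in code", re.compile(r"\bTODO\b")),
]


def scan(root: str, extensions: tuple = (".java", ".cbl", ".c")) -> list:
    """Walk the source tree and collect (file, line number, rule) for each match."""
    findings = []
    root_path = Path(root)
    if not root_path.is_dir():
        return findings
    for path in root_path.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for description, pattern in RULES:
                if pattern.search(line):
                    findings.append((str(path), lineno, description))
    return findings


if __name__ == "__main__":
    for file, lineno, rule in scan("legacy_src"):  # "legacy_src" is a placeholder path
        print(f"{file}:{lineno}: {rule}")
```

A real platform goes well beyond pattern matching into architectural and cross-component analysis, but even this toy version shows why a machine can inspect hundreds of thousands of lines consistently where a human reviewer cannot.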
This level of attention to structural quality is crucial in the IT department’s constant fight to eliminate outages and security vulnerabilities. So crucial, in fact, that failing to conduct automated assessment when building on top of a legacy application should certainly be considered a crime.