For many CIOs, reporting on software risk is a complex problem. The reports are usually compiled once a quarter and can take days, if not weeks, to complete. Worse than that, they often fail to deliver actionable insight into simple business questions: Which of my critical systems are most vulnerable? Are my IT vendors delivering as promised? How can we improve customer satisfaction? Are my development teams under-performing? How can we improve time-to-market for new projects?
Pay attention, US financial sector, because the UK is one step ahead of you … sort of. They’re at least willing to admit they have a problem with software risk and IT system resiliency, and admitting the problem is the first step on the path to recovery.
A recent report published by Tech Market View confirmed a 2012 warning by a director of the Prudential Regulatory Authority that the IT systems of UK banks were “antiquated” and that he could not say with confidence that they were robust. The statements were delivered to a committee in Northern Ireland as it discussed the major IT failure at RBS/Ulster Bank in 2012, which affected the bank’s customers all over the world.
Agile software development is a streamlined, transparent process with speed built into each step. It’s so focused on speed, in fact, that developers call what they can successfully accomplish in a two-week sprint their ‘velocity.’ But while Agile development teams do incorporate unit tests and the testing of functional aspects of their code, there is often little analysis of structural quality above the module level. This is what makes most architects in enterprise software organizations nervous about Agile.
The software architecture is one of the most important artifacts created in the lifecycle of an application. Architectural decisions directly impact the achievement of business goals, as well as functional and quality requirements. Yet once the architecture has been designed, most architectural descriptions are seldom verified or maintained over time. Architecture compliance checking is a sound application risk management strategy that can detect deviations between the intended architecture and the implemented architecture.
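To make the idea concrete, here is a minimal sketch of what automated architecture compliance checking can look like. The layer names (`ui`, `service`, `db`) and the dependency rules are hypothetical, chosen only to illustrate how an intended architecture can be compared against the imports actually present in the code:

```python
# Minimal sketch of an architecture compliance check.
# Assumed (hypothetical) intended architecture: "ui" may depend on
# "service", "service" may depend on "db", and nothing else.
import ast

ALLOWED = {            # intended architecture: layer -> layers it may import
    "ui": {"service"},
    "service": {"db"},
    "db": set(),
}

def violations(layer: str, source: str) -> list:
    """Return (layer, imported_layer) pairs that break the intended layering."""
    tree = ast.parse(source)
    bad = []
    for node in ast.walk(tree):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for name in targets:
            top = name.split(".")[0]          # top-level package = layer name
            if top in ALLOWED and top != layer and top not in ALLOWED[layer]:
                bad.append((layer, top))
    return bad

# A "ui" module importing the "db" layer directly deviates from the design:
print(violations("ui", "import db.orders\nfrom service import checkout"))
# → [('ui', 'db')]
```

Run regularly in a build pipeline, a check like this turns the architectural description from a static document into a verified constraint, flagging deviations between the intended and the implemented architecture as they appear.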
The ever-growing cost of maintaining systems continues to crush IT organizations, robbing them of the ability to fund innovation while increasing risks across the organization. The cost of maintaining a software system is directly proportional to the size and complexity of that system, so any effort to reduce size and complexity translates directly into lower software maintenance costs. The following provides guidance on how static code analysis of applications generates actionable insight you can use to immediately improve the maintainability of systems.
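As a small illustration of the kind of size-and-complexity measurement a static analysis performs, the sketch below counts branch points per function as a rough complexity proxy. The metric and any threshold you apply are illustrative assumptions, not the method of any particular tool:

```python
# Rough sketch of a static complexity measurement: count branch points
# (if/for/while/try/boolean operators) per function. Functions scoring
# above a chosen threshold are candidates for simplification.
import ast

def complexity(source: str) -> dict:
    """Return a {function_name: branch-point score} map for `source`."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While,
                                          ast.Try, ast.BoolOp))
                           for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

source = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(complexity(source))  # → {'simple': 1, 'tangled': 4}
```

Applied across a whole codebase, even a crude metric like this makes the size-and-complexity argument actionable: it ranks the functions whose simplification would most directly reduce maintenance cost.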
In the spirit of Yogi Berra, I’ve decided to list the obvious things I know in life: water is wet, the sky is blue, and big software projects fail.
I’m sure that you are aware of the very public failure of the centerpiece of Obamacare, Healthcare.gov, and by now have heard enough of the public interrogations of this project, the system, its agency, and policy.
Rather than adding to that, I’d caution that instead of staring too long and too closely at this incident, we should allow it to serve as a simple reminder that there are more and bigger failures lurking.
Eight years ago I organized the Workshop on Technical Debt at Calvin College, and I’ve stayed involved in the discussion since.
The concept, to me, seems simple, intuitive, and obvious: technical shortcuts yield a slight increase in value today at the expense of speed tomorrow.
Then Ron Jeffries, a co-author of the Agile Manifesto, got up to speak along with his partner, Chet Hendrickson. Ron and Chet had been part of the team that invented Extreme Programming in 1999.
What they had to say turned the workshop upside down.