Spring is in the air, and that can only mean one thing: application risk analytics! Not exactly what you were expecting? Well, neither are those pesky architectural glitches that are slowing down your software development and thrusting your CIO and software teams into the limelight.
Join us at the Art Directors Club Wednesday, April 23, in the heart of Manhattan and let’s raise a glass for insight over ignorance, lucidity over obscurity, light over darkness, order over chaos, and of course warmth over frost. You can register for the event here.
On April 2, the IT industry was rocked by the announcement that over 60 percent of the Internet's web servers, even those using secure SSL connections, were vulnerable to attack due to a new weakness codenamed Heartbleed. The weakness lives in the OpenSSL cryptographic software library, which encrypts sessions between consumer devices and websites. Specifically, the flaw sits in OpenSSL's implementation of the TLS "heartbeat" extension, which pings keep-alive messages back and forth between client and server. Hence the name of the bug.
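To make the mechanism concrete, here is a simplified sketch of the class of bug behind Heartbleed: the server echoes a heartbeat payload back using the attacker-supplied length field instead of the payload's real size, so adjacent memory leaks out. This is an illustrative toy in Python, not the actual OpenSSL C code; all names and the memory layout are invented for the example.

```python
# Toy model of server memory: a 10-byte heartbeat payload followed by
# sensitive data that happens to sit next to it in memory.
SERVER_MEMORY = bytearray(b"hb-payload" + b"secret-key-material-0123456789")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # BUG: trusts the length the client claims, so it can read far past
    # the real 10-byte payload and return whatever follows it.
    return bytes(SERVER_MEMORY[:claimed_len])

def heartbeat_patched(claimed_len: int, actual_len: int = 10) -> bytes:
    # FIX (mirroring the real patch's approach): silently discard any
    # request whose claimed length exceeds the actual payload length.
    if claimed_len > actual_len:
        return b""
    return bytes(SERVER_MEMORY[:claimed_len])

leak = heartbeat_vulnerable(40)
assert b"secret-key-material" in leak   # adjacent memory leaks out
assert heartbeat_patched(40) == b""     # patched server ignores the request
```

The real vulnerability worked the same way at the protocol level: a client could claim a payload of up to 64 KB while sending almost nothing, and the unpatched server would dutifully copy that many bytes of its own memory into the response.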
This is a critical vulnerability that is already testing the contingency plans of thousands of Linux vendors, as well as hosting companies.
Outsourced application development is in a sorry state, with myriad software quality issues causing unprecedented glitches and crashes. It's not that all outsourcers are making terrible software; rather, it's that governments and organizations have no way of accurately measuring the performance, robustness, security, risk, and structural quality of applications once they've been handed the keys.
That’s why CISQ, the leader in setting the standard for software measurement in the enterprise, will be hosting a seminar this Wednesday, March 26, about software quality for federal acquisitions.
When applications crash due to code quality issues, the common question is, “How could those experts have missed that?” The problem is, most people imagine software development as a room full of developers, keyboards clacking away with green, Matrix-esque code filling up the screen as they try to perfect the newest ground-breaking feature. In reality, however, most of the work developers do is maintenance: fixing the bugs found in production code to ensure a higher level of code quality.
Not only does this severely reduce the amount of business value IT can bring to the table, it also exponentially increases the cost in developing and maintaining quality applications. And even though the IT industry has seen this rise in cost happening for years, they’ve done little to stem the rising tide. The time has come to draw a line in the sand.
Reducing software risk is at the top of every CIO’s agenda this year — just like it was last year, and the year before that. And like the old saying goes, “Those who cannot remember the past are condemned to repeat it.” If CIOs are trying to reduce their software risk the same way they did in 2013, they’re setting themselves up for another year of crashes, outages, and angry customers.
So to help you remember the sordid past you don’t want to repeat, we compiled an infographic of the costliest software disasters in 2013. Hopefully seeing all these catastrophes and their associated costs in one place will help you prioritize fixing the riskiest applications in your portfolio. If not, it will at least be a reminder of how much a relatively minor glitch can cost an organization.
For many CIOs, reporting on software risk is a complex problem. The reports are usually compiled once a quarter, and can take days if not weeks to complete. But worse than that, they often fail to deliver actionable insight to answer simple business questions. Which of my critical systems are most vulnerable? Are my IT vendors delivering as promised? How can we improve customer satisfaction? Are my development teams under-performing? How can we improve time-to-market for new projects?
Pay attention, US financial sector, because the UK is one step ahead of you … sort of. They’re at least willing to admit they have a problem with software risk and IT system resiliency, and admitting it is the first step on the path to recovery.
A recent report published by Tech Market View confirmed a 2012 warning by a director of the Prudential Regulatory Authority that the IT systems of UK banks were “antiquated” and that he could not say with confidence that they were robust. The statements were delivered to a committee in Northern Ireland as it discussed the major IT failure at RBS/Ulster Bank in 2012, which affected the bank’s customers all over the world.