The recent spate of IT glitches and ‘power outages’ at British Airways, which forced the UK’s national carrier to cancel all its flights worldwide at the start of the May bank holiday, along with the WannaCry ransomware attack that brought the National Health Service to a halt, have once again exposed how much today’s businesses depend on their IT systems. The complexity of these systems, and the number of vulnerabilities lurking in software used by critical infrastructure sectors such as the NHS, airlines, and telecom operators, have made headlines once more.
For many IT-intensive enterprises, the ballooning cost of maintaining software applications may be the biggest elephant in the room. Software maintenance costs typically comprise up to 75% of the total cost of ownership of each application. With so much investment and energy dedicated to keeping the lights on, finding a way to better allocate IT resources — even by a marginal amount — can have a significant impact on the enterprise’s capacity to innovate.
CAST’s research into this area has uncovered some provocative findings. As we’ve discussed previously on the On Quality blog, the cost of maintaining a software application is directly proportional to its size and complexity. IT organizations can take several steps using static code quality analysis to reduce size and complexity, and thus diminish their software maintenance costs.
The current state of measuring the environmental impact of our IT infrastructure is missing a big piece of the puzzle. One of the metrics we use, power usage effectiveness (PUE), only compares the total amount of energy a data center facility consumes against how much of that power actually reaches the computing hardware.
But what about the millions of lines of code running on that hardware? How can we know if that’s energy efficient code?
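PUE itself is just a ratio of two energy figures, which is exactly why it says nothing about the software. A quick sketch, using made-up numbers purely for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy consumed by the IT equipment itself.

    1.0 is the theoretical ideal (every watt entering the facility
    reaches the hardware); real data centers run above that because
    of cooling, lighting, and power-conversion overhead.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures for illustration only.
print(round(pue(total_facility_kwh=1_500_000,
                it_equipment_kwh=1_000_000), 2))  # → 1.5
```

Two data centers with identical PUE scores can differ enormously in how much useful work each kilowatt-hour produces, because the metric never looks inside the workload — the code — that the hardware is running.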
There has been a tectonic shift over the past two to three years, with businesses realizing that analysis and measurement of critical business software is no longer simply a nice-to-have, but a necessity. Every CIO, CEO, and board member is keenly aware that the stakes are too high and that the size and complexity of mission-critical systems have outpaced traditional technological safeguards.
In the midst of debt ceiling and government shutdown negotiations on Monday, the Obama Administration launched its new online health insurance marketplace — HealthCare.gov — where Americans can go to shop for affordable healthcare.
However, it seems even the federal government isn’t immune from technical snags.
It’s no shocker that the federal government is turning to cost-cutting measures in the middle of a down economy. But there’s a bigger problem looming on the horizon.
The federal government has become very dependent on open source products, which wouldn’t be a problem if open source software were held to the same standard as custom commercial code.
Why our very own Lev Lesokhin, of course.
If you were on Twitter on Tuesday (or watching the market index), you no doubt saw AP’s fake tweet about an explosion at the White House that wounded the president, and the market and media frenzy that followed. Not only was it remarkable to see the effect one rogue tweet could have on market stability (it temporarily wiped out $136B from the S&P 500), but the whole episode also underscored how fast-paced the world of business has become.