In my last post, I shared my opinion on the benefits of non-representative measures for some software risk mitigation use cases. But does that mean I am always better served by non-representative measures? Of course not. There is no contradiction here, just a pragmatic approach: different use cases are best handled with different, adapted pieces of information.
Finding the right tools for the right challenge

The growing cost of most software development efforts can be traced back to one underlying cause: a lack of visibility into the software. As the size and complexity of business-critical applications grow, along with the complexity of sourcing environments, there is an increasing need for app owners, architects, and developers to truly understand their codebases. Without visibility into the implementation, it is hard for a developer to grasp all the nuances of the code, which explains the disproportionate amount of time developers need to identify the root cause of defects.
No offense, but I’m not addicted to representative measures. In some areas, I am more than happy to have them, such as the balance of my checking and savings accounts, which I’d like measured to the nearest cent. But I don’t need representative measures 100 percent of the time. On the contrary, in some areas I strongly prefer non-representative measures that provide me with efficient guidance.
Dear Technology Colleagues at Derivatives Exchanges, I’m sure you do not need me to tell you that the investment community is becoming more aware of how much it depends on the reliability of the software operating our exchanges. Yet many of your competitors have fallen victim to the reputational damage caused by software glitches. Many of us, as technology professionals and as individual investors, are shocked by the escalating pace of major software outages reported by exchanges and major market makers.
As the product manager for CAST Highlight, it’s refreshing to see the discussion shift from the “quality of cloud solutions” to “cloud quality solutions.” Recently, there has been an increasing number of cloud-based static code quality analysis tools, or should I say services. A few that I’ve been watching include:

Code Climate: consolidates the results from a suite of Ruby static analysis tools into a real-time report, giving teams the information they need to identify hotspots, evaluate new approaches, and improve code quality.
Most organizations have started to realize that code quality is an important root cause of many of their issues, whether incident levels or time to value. The growing complexity of IT development environments, with outsourcing, the required velocity, and the introduction of Agile, has raised the issue of code quality, sometimes to the executive level. Business applications have always been complex. You can go back to the 70s, even the 60s, and hear about systems with millions of lines of code. But here’s the rub: in those days it was millions of lines of COBOL or some other language, but it was all one language.
It’s no shocker that the federal government is turning to cost-cutting measures in the middle of a down economy. But there’s a bigger problem looming on the horizon. The federal government has become heavily dependent on open source products, which wouldn’t be a problem if open source software were held to the same standards as custom commercial code.