It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud because of its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?
The best approach we’ve seen so far is Service-Oriented Development of Applications (SODA), which is the process of developing applications with Service-Oriented Architecture (SOA) in mind. The idea is to create an overall business service that can adapt to a business’s ever-changing requirements at the lowest cost and within the shortest cycle.
Everything moves fast in the IT world. It is said that one human year translates to seven years of a dog’s life; in IT, the ratio is much higher. Every year, new programming languages are created. They might differ from their predecessors by little more than a comma, but they are created nonetheless. Should we adapt our conceptions of software quality to those new languages?
This Mark Twain quote comes back to me whenever I think about the central role that ERP platforms play in the innovation efforts of many organizations. How long has it been since we first started to hear that ERP was dead?
It was well over 10 years ago that the inflexibility of ERP systems led naysayers to predict they would be phased out and replaced by more modular, adaptable applications better equipped to support the many unique aspects of each enterprise. More recently, the cloud/SaaS/subscription-model pundits began the charge again.
In my last post, I shared my opinion on the benefits of non-representative measures for some software risk mitigation use cases. But does that mean I am always better served by non-representative measures? Of course not.
No bipolar disorder here, just a pragmatic approach to different use cases that are best handled with some adapted pieces of information.
As the product manager for CAST Highlight, it’s refreshing to see a shift in discussions about the “quality of cloud solutions” to “cloud quality solutions.” Recently, there have been an increasing number of cloud-based static code quality analysis tools, or should I say services. A few that I’ve been watching include:
Code Climate consolidates the results from a suite of Ruby static analysis tools into a real-time report, giving teams the information they need to identify hotspots, evaluate new approaches, and improve code quality.
Most organizations have started to realize that code quality is an important root cause of many of their issues, whether incident levels or time to value. The growing complexity of IT development environments, driven by outsourcing, required velocity, and the introduction of Agile, has raised the issue of code quality, sometimes to the executive level.
Business applications have always been complex. You can go back to the ’70s, even the ’60s, and hear about systems with millions of lines of code. But here’s the rub: in those days it was millions of lines of COBOL or some other single language. All one language. All one system. All one application in a nice, neat, tidy package.
Here we go again. You have probably heard, since it’s been reported everywhere, that American Airlines was grounded Tuesday, leaving passengers stranded for several hours due to a “computer glitch” in its reservation system. Because of the glitch, gate agents were unable to print boarding passes, and some passengers described being stuck for long stretches on planes on the runway, unable to take off or, having landed, initially unable to move to a gate.