SODA, Anyone?

It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud because of its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?
The best approach we’ve seen so far is Service-Oriented Development of Applications (SODA): the process of developing applications with Service-Oriented Architecture (SOA) in mind. The idea is to create an overall business service that can adapt to the business’s ever-changing requirements at the lowest cost and with the shortest cycle time. The sketch below illustrates the principle.
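To make that concrete, here is a minimal, hypothetical sketch of SODA-style design in Python. The service names (PricingService and its implementations) are invented for illustration, not taken from any product; the point is simply that consumers code against a service contract, so the business can swap implementations as requirements change.

```python
# A minimal sketch of SODA-style design; all names here are hypothetical.
from abc import ABC, abstractmethod


class PricingService(ABC):
    """A business capability exposed as a service contract."""

    @abstractmethod
    def quote(self, product_id: str, quantity: int) -> float:
        ...


class StandardPricing(PricingService):
    """One implementation of the contract."""

    def quote(self, product_id: str, quantity: int) -> float:
        return 9.99 * quantity


class PromotionalPricing(PricingService):
    """A replacement implementation; consumers need no changes."""

    def quote(self, product_id: str, quantity: int) -> float:
        return 9.99 * quantity * 0.85


def checkout(pricing: PricingService, product_id: str, quantity: int) -> float:
    # The consumer depends only on the contract, so implementations
    # can change as business requirements change.
    return pricing.quote(product_id, quantity)


print(checkout(StandardPricing(), "sku-42", 3))
print(checkout(PromotionalPricing(), "sku-42", 3))
```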

When the software fails, first blame the hardware

We’ve made it a point on our blog to highlight that software glitches in important IT systems, like those at NatWest and Google Drive, can no longer be written off as “the cost of doing business.” Interestingly, we’re starting to see another concerning trend: more and more crashes are blamed on faulty hardware or network problems, while the software itself escapes scrutiny. Yet the incident rate can differ by more than a factor of ten between applications with similar functional characteristics. Is it possible that the robustness of the software inside those applications has something to do with the apparent hardware failures? I can picture a frustrated data center operator reading this and nodding vigorously.

Got SOA? Get (Automated) Quality Metrics!

Four Reasons for Measuring Application Quality in SOA Development, Rollout, and Governance
1. Stop critical single points of failure. A failure in a single SOA component can ripple out to multiple applications at once, so the quality of each component is critical (one way to detect such components is sketched after this item).
a. Automated quality measures (like the ones CAST produces) compare applications and components against industry best practices to identify quality lapses in your SOA repositories.
b. They also provide actionable guidance on how to improve the quality of your SOA components.
Question: Do you know how to reliably detect and stop critical single points of failure?
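As a hedged illustration (not CAST’s actual analysis), one simple way to surface single points of failure is to compute the fan-in of each service: how many applications depend on it. The dependency inventory below is hypothetical sample data; in practice it would be extracted from your SOA repository.

```python
# Flag SOA components with high fan-in: a defect in one of these
# affects many applications at once. Sample data is hypothetical.
from collections import defaultdict

# application -> services it calls (hypothetical inventory)
dependencies = {
    "billing-app": ["customer-svc", "payment-svc"],
    "crm-app": ["customer-svc", "email-svc"],
    "portal-app": ["customer-svc", "payment-svc", "email-svc"],
}

FAN_IN_THRESHOLD = 2  # tune to the size of your landscape

fan_in = defaultdict(set)
for app, services in dependencies.items():
    for svc in services:
        fan_in[svc].add(app)

for svc, apps in sorted(fan_in.items()):
    if len(apps) >= FAN_IN_THRESHOLD:
        print(f"{svc}: used by {len(apps)} apps {sorted(apps)} "
              "- a quality lapse here is a single point of failure")
```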
2. Manage application performance across the multiple technologies and tiers spanned by your SOA systems. SOA repositories are typically built on a wide range of technologies, and performance problems are hardest to detect at the interfaces between tiers and technologies. Make sure your quality measurement system covers most technologies and platforms, so it can highlight the quality of components across different layers (a simple boundary-timing sketch follows this item).
Question: How does your data and application logic behave when crossing technology tiers?
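Here is a minimal, hypothetical sketch of the underlying idea: instrument calls that cross a technology boundary so latency at the interface becomes visible. This is a toy stand-in, not a substitute for full measurement tooling; the fetch_customer function and its sleep are placeholders for a real cross-tier call.

```python
# Time calls that cross a technology boundary (e.g., app tier -> database
# tier) to expose latency that hides at the interfaces between tiers.
import functools
import time


def timed_boundary(tier_name):
    """Decorator that logs wall-clock latency of a cross-tier call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"[{tier_name}] {func.__name__} took {elapsed_ms:.1f} ms")
        return wrapper
    return decorator


@timed_boundary("app->db")
def fetch_customer(customer_id):
    time.sleep(0.05)  # stand-in for a real database round trip
    return {"id": customer_id, "name": "Acme"}


fetch_customer(42)
```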
3. Give your architects and developers practical guidance. Governance is the key to successful SOA implementation, yet it is often an “ivory-tower” activity that offers little practical guidance for day-to-day operational decisions. Strict conformance to SOA component development guidelines is key to a successful implementation, so deploy a system that automates the enforcement of those guidelines (a toy rule check is sketched after this item).
Question: How can architectural rules be written to provide practical guidance for developers?
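To show what an automatable architectural rule can look like, here is a toy sketch, not a real governance tool: it flags Python modules in a hypothetical "ui" layer that import the "db" layer directly, violating an assumed layering guideline. The directory convention (first folder under the project root names the layer) is an assumption for the example.

```python
# Toy rule check: presentation-layer code must not import the data
# layer directly. Layer names and directory layout are hypothetical.
import ast
from pathlib import Path

FORBIDDEN = {("ui", "db")}  # (importing layer, imported layer) pairs


def layer_of(path: Path) -> str:
    # Assume the first directory under the project root names the layer.
    return path.parts[0] if path.parts else ""


def check(root: str) -> list:
    violations = []
    for py_file in Path(root).rglob("*.py"):
        rel = py_file.relative_to(root)
        tree = ast.parse(py_file.read_text())
        for node in ast.walk(tree):
            modules = []
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                modules = [node.module]
            for name in modules:
                if (layer_of(rel), name.split(".")[0]) in FORBIDDEN:
                    violations.append(f"{rel}:{node.lineno} imports {name}")
    return violations


for v in check("src"):
    print("RULE VIOLATION:", v)
```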
4. Make service reuse a reality. The promise of service reuse, and the quantum leap in productivity it can generate, is quickly undone by unprincipled service proliferation. The result is a spaghetti bowl of services, with many different variants of the same service and developers confused about what to reuse.
a. Automatically generate detailed information on service inputs and outputs, giving developers and architects a practical way to see the degree of similarity between any two services.
b. Developers and architects can use this practical information to drive decisions about service modification, consolidation, or elimination.
c. Automated visibility into service attributes is what makes service reuse a reality (one way to measure similarity is sketched after the question below).
Question: How do you keep the explosion of services in check?
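As a hedged sketch of one possible similarity measure (not the only one, and not any vendor’s method), the example below compares services by Jaccard similarity over their input/output field names. The service signatures are hypothetical sample data; real tooling would extract them automatically.

```python
# Compare services by the overlap of their input/output fields to find
# near-duplicate services that are candidates for consolidation.
signatures = {
    "create-customer-v1": {"name", "email", "address", "customer_id"},
    "register-customer": {"name", "email", "phone", "customer_id"},
    "send-invoice": {"invoice_id", "customer_id", "amount"},
}


def jaccard(a: set, b: set) -> float:
    """Similarity between two field sets: intersection over union."""
    return len(a & b) / len(a | b) if a | b else 0.0


services = sorted(signatures)
for i, s1 in enumerate(services):
    for s2 in services[i + 1:]:
        score = jaccard(signatures[s1], signatures[s2])
        if score >= 0.5:  # threshold for flagging consolidation candidates
            print(f"{s1} ~ {s2}: {score:.2f} similar; consider merging")
```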