SODA, Anyone?

It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud, drawn by its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?

The best approach we’ve seen so far is Service-Oriented Development of Applications (SODA): the process of developing applications with Service-Oriented Architecture (SOA) in mind. The idea is to create an overall business service that can adapt to the business’s ever-changing requirements at the lowest cost and within the shortest cycle.

SODA: Challenges and Benefits

Despite SODA’s apparent simplicity (just wrap every legacy component in web services and it’s “SOA-enabled”), it actually demands more development skill and control. The skills used in designing and deploying reusable components with traditional languages and tools are all the more applicable to SODA.

Yet wrapping components with web services is generally not enough. Legacy and packaged applications and databases were designed for traditional transactional business processing, so reusing them through web services can require a fair amount of redesign.

SODA also increases complexity: by further abstracting the underlying technology, it makes dependency analysis even more difficult to perform and creates a new challenge in tracking components that are much smaller and far more numerous.

However, SODA presents a great opportunity to increase the quality of all applications: high-quality software components get reused in multiple contexts and carry their intrinsic quality with them. The opportunity becomes a real strength if one can ensure that one’s components are high quality. Otherwise, it becomes a major weakness, as poor-quality components automatically bring their frailties to every application they participate in.

Analyzing Multi-Tiered Systems

The level of unpredictability is related to the openness of the exposed service as well as its success as a reusable component. This means extra care has to be taken regarding its:

  • Reliability: it must operate as expected, both in terms of accuracy and in terms of uptime;
  • Security: it must protect data integrity and confidentiality, despite the new ways “in” to the application and the new ways to use each feature;
  • Performance: it must be able to cope with unexpected and unpredictable workloads.

Compared to traditional development contexts, the need to ensure application development quality is even more critical. Adding layers to a system compounds service downtime: if each layer is reliable 80% of the time, a three-layer system is reliable only about 51% of the time (80% × 80% × 80% = 51.2%). The need for higher-quality layers is therefore critical; with multiple layers, one cannot be satisfied with a merely fair quality level. For a three-layer system to be reliable 80% of the time, each layer must be reliable about 93% of the time.
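The arithmetic behind those figures is simple to check; here is a minimal sketch (the 80% layer reliability is the illustrative figure from above, not a measurement):

```python
def system_reliability(layer_reliabilities):
    """Overall reliability of layers in series: the product of each layer's reliability."""
    result = 1.0
    for r in layer_reliabilities:
        result *= r
    return result

def required_layer_reliability(target, layers):
    """Per-layer reliability needed for `layers` identical layers to meet `target` overall."""
    return target ** (1.0 / layers)

# Three layers at 80% each -> only ~51% overall.
print(round(system_reliability([0.8, 0.8, 0.8]), 3))   # 0.512

# To reach 80% overall across three layers, each must hit ~93%.
print(round(required_layer_reliability(0.8, 3), 3))    # 0.928
```

The same formulas show how quickly quality requirements escalate as more tiers are added.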

The recommended approach to meet this challenge is to employ a full life-cycle defect removal model. This model includes source code and architecture inspection for defect tracking from the early stages of the application life cycle, and it takes the entire source code package into account. Functional and dynamic testing alone are less likely than ever to cover all the operating use cases.

Being able to understand the actual orchestration patterns is also key to unraveling architectural inconsistencies or missed opportunities. For example, when multiple elementary services access the same or similar resources, this can be an opportunity to create a new service that handles the whole interaction, avoiding multiple elementary service invocations and removing functional, and therefore technical, redundancy.
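To illustrate the consolidation idea, here is a hypothetical sketch in which two elementary services are wrapped behind a single composite service. All service and method names are invented for illustration; the point is simply that the interaction logic lives in one place instead of being duplicated by every caller:

```python
class CustomerService:
    """Stand-in for an elementary service exposing customer profiles."""
    def get_profile(self, customer_id):
        return {"id": customer_id, "name": "Jane Doe"}  # would be a remote call

class OrderService:
    """Stand-in for an elementary service exposing order history."""
    def get_orders(self, customer_id):
        return [{"order_id": 1, "total": 42.0}]         # would be a remote call

class CustomerOverviewService:
    """Composite service owning the whole interaction, so callers make one
    invocation instead of repeating two elementary calls and merging results."""
    def __init__(self, customers, orders):
        self.customers = customers
        self.orders = orders

    def get_overview(self, customer_id):
        profile = self.customers.get_profile(customer_id)
        profile["orders"] = self.orders.get_orders(customer_id)
        return profile

overview = CustomerOverviewService(CustomerService(), OrderService()).get_overview(7)
print(overview["id"], len(overview["orders"]))  # 7 1
```

Beyond removing redundant invocations, such a composite service gives the redundant functionality a single owner, which is exactly where quality analysis and testing effort should concentrate.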

I will talk more about how you can eliminate redundancy and ensure quality service-oriented applications in my next blog post, so stay tuned to this space.

Philippe-Emmanuel Douziech

Philippe-Emmanuel Douziech has been a Product Manager at CAST since 2005, previously in charge of the CAST Dashboard and now responsible for the CAST Quality Model. He has participated in the CISQ workshops from the start, helping define the factors that affect applications’ vulnerability, availability, and responsiveness to end users, as well as their maintenance cost, effort, and duration. Prior to CAST, he was a Product Manager at ORSYP, in charge of the event-driven peer-to-peer job scheduler Dollar Universe. Before ORSYP, he worked on Inertial Confinement Fusion simulation and experiment. Philippe-Emmanuel holds a master’s degree in engineering and executive sciences from MINES ParisTech.
