About Philippe Emmanuel Douziech

Philippe-Emmanuel Douziech has been a Product Manager at CAST since 2005, previously in charge of the CAST Dashboard and now responsible for the CAST Quality Model. He has participated in the CISQ workshops from the start, helping define the factors affecting applications’ vulnerability, availability, and responsiveness to end users, as well as applications’ maintenance cost, effort, and duration. Prior to CAST, he was a Product Manager at ORSYP, in charge of the event-driven peer-to-peer job scheduler Dollar Universe. Prior to ORSYP, he worked on Inertial Confinement Fusion simulations and experiments. Philippe-Emmanuel holds a master’s degree in engineering and executive sciences from MINES ParisTech.

See Through the Cloud!

There is no question that the Cloud is no longer a passing phase. In the span of a few years, it has moved from an interesting concept to a useful business tool. What began as a creative tool for testing has moved into the mainstream as a way to improve hardware utilization and expand capacity. The benefits of the Cloud are well established, and more customers are moving to consumption-based models, either with captive or public Cloud solutions. Many tools exist to help with Cloud migrations, but few have the flexibility to “see through the Cloud” to the application code and make that code fit this new world.

SODA, Anyone?

It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud because of its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?
The best approach we’ve seen so far is Service-Oriented Development of Applications (SODA), the process of developing applications with Service-Oriented Architecture (SOA) in mind. The idea is to create an overall business service that can adapt to the business’s ever-changing requirements at the lowest cost and within the shortest cycle, as the sketch below illustrates.
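To make that idea concrete, here is a minimal, self-contained Java sketch (all names and figures are hypothetical, not a prescribed SODA or CAST API) of the core principle: consumers depend on a service contract rather than on an implementation, so the implementation can change without breaking them.

```java
// QuoteServiceDemo.java — a minimal sketch (hypothetical names) of the SODA
// idea: the business capability is exposed as a service contract, so the
// implementation can evolve without breaking consumers.
public class QuoteServiceDemo {

    // The service contract: consumers depend only on this interface.
    interface QuoteService {
        Quote priceOrder(Order order);
    }

    // Plain data carriers for the request and the response.
    record Order(String productId, int quantity) {}
    record Quote(String productId, double totalPrice) {}

    // One possible implementation; it can be swapped (new pricing rules,
    // a cloud-hosted engine, ...) without touching any consumer code.
    static class SimpleQuoteService implements QuoteService {
        private static final double UNIT_PRICE = 9.99;

        @Override
        public Quote priceOrder(Order order) {
            return new Quote(order.productId(), order.quantity() * UNIT_PRICE);
        }
    }

    public static void main(String[] args) {
        QuoteService service = new SimpleQuoteService();
        Quote quote = service.priceOrder(new Order("SKU-42", 3));
        System.out.println(quote); // Quote[productId=SKU-42, totalPrice=29.97]
    }
}
```

Replacing SimpleQuoteService with, say, a cloud-hosted pricing engine only requires a new class implementing the same contract; no consumer code changes.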

Why are there so many hurdles to efficient SAM benchmarking?

Two opposite sides
When dealing with Software Analysis and Measurement benchmarking, people’s behavior generally falls into one of the following two categories:

“Let’s compare anything and draw conclusions without giving any thought about relevance and applicability”
“There is always something that differs and nothing can ever be compared”

All too often, there is no sensible middle ground.

Representative vs. non-representative measures: Bipolar disorder?

In my last post, I shared my opinion on the benefits of non-representative measures for some software risk mitigation use cases. But does that mean I am always better served by non-representative measures? Of course not.
There is no bipolar disorder here, just a pragmatic approach: different use cases are best handled with suitably adapted pieces of information.

Do I look like someone who needs representative measures?

No offense, but I’m not addicted to representative measures. In some areas, I am more than happy to have them. Like when talking about the balance of my checking and savings accounts. In that case, I’d like representative measures, to the nearest cent.
But I don’t need representative measures 100 percent of the time. On the contrary, in some areas I strongly need non-representative measures to provide efficient guidance.

Technical Debt: Principal but no interest?

Making technical debt visible …

Making technical debt visible already proves to be quite a challenge, as it’s all about exposing the underwater part of the iceberg.
But how deep underwater does it go? To know for sure, you would need the right diving equipment. To go just below the surface, you would start with a snorkel. But to go far down, you need a deep-sea exploration submersible.
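Before diving, it helps to make the visible part of the iceberg, the principal, concrete. Here is a back-of-the-envelope Java sketch (every figure is a hypothetical assumption, not a CAST benchmark) of the estimation commonly used for the principal: violations worth fixing × effort per fix × cost of effort.

```java
// TechnicalDebtSketch.java — a back-of-the-envelope sketch (all figures are
// hypothetical assumptions) of a common technical-debt "principal" estimate:
// violations to fix × effort per fix × cost of effort.
public class TechnicalDebtSketch {

    public static void main(String[] args) {
        int violationsToFix = 1_200;  // assumed count of violations worth fixing
        double hoursPerFix = 0.5;     // assumed average effort per fix, in hours
        double hourlyRate = 75.0;     // assumed blended cost per hour

        double principal = violationsToFix * hoursPerFix * hourlyRate;
        System.out.printf("Estimated principal: $%,.2f%n", principal);
        // Estimated principal: $45,000.00
    }
}
```

The interest, the extra effort every change costs while the debt remains unpaid, is the part of the iceberg that stays underwater and is far harder to quantify.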

There is code duplication detection, and then there is code duplication detection

Many software solutions feature the detection of duplicated source code; indeed, it is one cornerstone of software analysis and measurement.
The value of dealing with duplicated code is easy to understand: it avoids having to propagate bug fixes and evolutions to every copy of a faulty piece of code, it promotes reuse, and it prevents an unnecessarily large code base (especially when maintenance outsourcing is billed by the line of code).
Now that everyone is convinced of the importance of such capabilities, let’s dive deeper into how it is done. There are various solutions, and not all are equal.
Can the difference be explained without looking at an algorithm or cryptic formulas? Let’s try.
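As a starting point, here is a minimal Java sketch (purely illustrative, not CAST’s algorithm) of the simplest family of detectors: hash every window of N consecutive lines and report any window that appears more than once.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// DuplicateLinesSketch.java — an illustrative sketch of the simplest family
// of duplicate-code detectors: compare every window of N consecutive lines
// and report windows whose content appears at more than one position.
public class DuplicateLinesSketch {

    static final int WINDOW = 3; // minimum duplicated-block size, in lines

    public static void main(String[] args) {
        List<String> lines = List.of(
            "int total = 0;",
            "for (int i = 0; i < n; i++) {",
            "    total += values[i];",
            "}",
            "int total = 0;",
            "for (int i = 0; i < n; i++) {",
            "    total += values[i];",
            "}"
        );

        // Map each window's content to every line index where it starts.
        Map<String, List<Integer>> windows = new HashMap<>();
        for (int i = 0; i + WINDOW <= lines.size(); i++) {
            String key = String.join("\n", lines.subList(i, i + WINDOW));
            windows.computeIfAbsent(key, k -> new ArrayList<>()).add(i);
        }

        // Any window appearing at two or more positions is a duplicate.
        windows.forEach((key, starts) -> {
            if (starts.size() > 1) {
                System.out.println("Duplicate " + WINDOW + "-line block starting at lines " + starts);
            }
        });
    }
}
```

This exact-match approach catches verbatim copy-paste, but rename a single variable in one copy and it sees nothing; detectors that compare token streams or syntax trees do catch such copies, which is precisely why not all solutions are equal.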