What Do Software Analytics and Your Doctor Have in Common?

As it turns out, plenty.
Recently, the U.S. government has implemented healthcare reimbursements based on the outcomes of medical treatments rather than the traditional fee-for-service approach. These performance-based programs are designed to improve healthcare quality while lowering treatment costs. It’s this outcomes-based approach that Fortune 500 companies are now considering as a way of reducing application development and maintenance (ADM) costs while improving software quality.

Digital Transformation Priorities: UK IT Leaders Weigh In

Is your IT landscape prepared for the ever-changing demands of digital transformation? A panel of top IT experts in the United Kingdom joined CAST at the Institute of Directors (IoD) in London on Tuesday to discuss this complex but increasingly pertinent question. The digital transformation event drew IT professionals from the financial services, telecommunications, retail, government, and IT services industries.

Does an IDE improve software quality?

Modern integrated development environments (IDEs) are equipped with more and more tools to help developers code faster and better. Among these are plug-ins that scan source code for error-prone constructs, dangerous or deprecated statements, and practices that should be avoided. IDEs come in a variety of flavors, both free and commercial, but in all cases developers can install them to improve the quality of the code they produce.
Some organizations encourage their developers to explore and deploy such tools, but as any good app developer knows, there is a difference between installing an app and using it consistently.
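To make this concrete, here is the kind of construct such plug-ins typically flag. The example is mine, not tied to any particular tool: a Python function with a mutable default argument, a classic warning because the default list is created once and silently shared across calls.

```python
# A construct most static-analysis plug-ins warn about:
# a mutable default argument (rule names vary by tool).

def append_item(item, bucket=[]):      # flagged: one shared default list
    bucket.append(item)
    return bucket

def append_item_fixed(item, bucket=None):  # the usual remediation
    if bucket is None:
        bucket = []                    # a fresh list on every call
    bucket.append(item)
    return bucket

first = append_item("a")
second = append_item("b")              # surprising: reuses the same list
print(second)                          # ['a', 'b'], not ['b']
print(append_item_fixed("b"))          # ['b']
```

A developer who reads the rule's documentation learns not just to silence the warning but why the language behaves this way, which is exactly the knowledge-building effect discussed below.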
Installing a tool is one thing, using it is another
Even if an organization deploys such tools, the results depend on how developers actually use them. They can use a tool frequently and to full effect; occasionally, when they have nothing else to do; never; or without taking its results into account.
Each developer has their own reasons for not adopting a new tool, but these are a few I have seen in my experience:

Management forces the tool on the team without explaining why, or how it can help by decreasing the number of errors. Under pressure from the business, the developer rejects it, deciding there is no time to waste playing with a new toy.
The developer installs some tools just to try them out, with no clear objective of improving the quality of the code they produce. The tool might be fun, but when the real work starts, it’s less exciting.
The developer is not concerned with software quality and treats the delivered results as a source of additional work. In this case, they deactivate rules, either to reduce the number of violations or because they consider the rules irrelevant.

The more you know
When a developer is convinced of the benefit, the analysis tool will be used regularly and its results studied and followed attentively. As a result, application quality will improve, the list of problems will shrink, and the quality of the work will keep getting better.
Moreover, if the tool provides documentation explaining why a best practice should be respected, what problem lies behind a quality rule, and how to fix it, it will enrich the developer’s knowledge of and competency with the technology they are working in. This also helps decrease the number of risky situations in the code base.
I’ve experienced this situation several times, and it is always surprising to hear a developer say they were unaware of some very specific or tricky behavior in a technology they had been using for a long time. It is never too late to learn!
Keep your team focused on system-level violations
Let me take a step back and say that an IDE is not a turn-key solution for perfect software quality. It does, however, let your developers keep a closer eye on the quality of their code, which in turn decreases the number of violations detected when the whole application is analyzed (and that number can be huge!). Your development team is then free to focus on architectural problems, those involving components, layers, and interacting technologies, that might have been missed during development. This allows your individual developers to contribute to your business objectives as best they can.
Choice for developers: be active or not
Despite such great tools, to err is human, and problems found in the code will still land in developers’ laps. They have a simple choice to make: either actively contribute to the quality of the app, or hope the problems fall on another developer’s desk. One way to influence that choice is to properly introduce and explain software quality to development teams.

Remediation cost versus risk level: Two sides of the same coin?

While working in a CISQ technical work group to propose the “best” quality model, one that would efficiently provide visibility into application quality (chiefly resilience, performance, and security), we discussed two approaches to expressing exposure. The first is a remediation cost approach, which measures the distance to the required internal quality level. The other is a risk level approach, which estimates the impact internal quality issues can have on the business.
Although both are based on the same raw data, the information they deliver differs. When we identify situations that do not comply with coding, structural, and architectural practices, the former approach estimates the cost to fix those situations, while the latter estimates the risk they create.
The remediation cost approach
This approach has appeal because:

It is simple to understand: we are talking effort and cost. Anyone can understand that fixing this type of issue takes that amount of time and money
It is simple to aggregate: effort or time simply adds up
It is simple to compare: more or less effort or time for this application to meet the requirements
It is simple to translate into an IT budget

However, its major drawback is that it does not estimate consequences. In terms of the technical debt metaphor, this approach only estimates the principal of the debt (that is, the amount you owe) without estimating the interest payments (the consequences for your development and maintenance activity, as well as for the service level of the indebted application). Why should we care? Because you will have to decide: Which part of the debt am I going to repay? Where do I start for maximum return on investment?
A half-day fix can relate to a situation that can crash the whole application. For example, an unknown variable in a massively used library might cost next to nothing to fix, while its consequences for the application’s behavior in production are severe. The remediation cost, however, conveys no sense of urgency. If I were monitoring the progress of the project, a leftover half day would not scare me into deciding to fix it no matter what, even at the cost of postponing the release date.
If the application did crash, would the answer, “Oh, we were just a half-day away from the required internal quality …” be acceptable? I think not. Something should have told me that despite the short distance to the required internal quality, the incurred risk was too high.
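The mismatch can be sketched with a toy findings list. The fields, rule names, and numbers below are invented for illustration: the aggregated remediation effort looks harmless on a dashboard even though one cheap-to-fix finding carries crash-level risk.

```python
# Hypothetical analysis findings; field names and values are assumptions.
findings = [
    {"rule": "deprecated API call",             "fix_hours": 6.0, "crash_risk": "low"},
    {"rule": "naming convention violation",     "fix_hours": 1.0, "crash_risk": "low"},
    {"rule": "unknown variable in shared library", "fix_hours": 4.0, "crash_risk": "high"},
]

# The remediation cost view: effort simply adds up (the "principal").
remediation_cost = sum(f["fix_hours"] for f in findings)

# The risk view: which findings could take the application down?
urgent = [f["rule"] for f in findings if f["crash_risk"] == "high"]

print(remediation_cost)  # 11.0 hours: trivial on a project plan
print(urgent)            # yet this half-day fix can crash the whole app
```

The sum is easy to budget and compare, which is exactly its appeal, but nothing in the total distinguishes a cosmetic backlog from a production time bomb.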
The risk level estimation approach
This approach has a different kind of appeal. Its proponents say it models what truly matters: the risk an application faces regarding its resilience, its performance under an unexpected peak of workload, its ability to ensure data integrity and confidentiality, its responsiveness to business requirements, its fitness for agile development contexts, and its ability to benefit from all sourcing options.
It puts the focus back on the fact that applications exist to serve the business, and to serve it well. Technical debt would not matter so much if it had no consequences for the business; it would remain a development organization’s issue rather than a corporate one.
There are some headlines in the trade news about late and over-budget IT projects. There are many more headlines in the mainstream news about major application blackouts and sensitive data leaks.
However, the risk level approach’s major drawback is its lack of pure objectivity. What is the business impact of a vulnerability to SQL injection? None, until it is exploited. This is less of a problem in an internal application, but much more serious in a web-facing, mission-critical, data-sensitive one.
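To make the SQL injection case concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table and the malicious input are invented. The flaw itself is objective, but whether it matters depends entirely on who can reach the query, which is the subjectivity discussed above.

```python
import sqlite3

# Toy database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # a classic injection payload

# Vulnerable: user input concatenated straight into the statement.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # [('admin',)] -- the WHERE clause was bypassed

# Safe: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "' OR '1'='1"
```

In an internal tool behind a firewall this pattern may sit unexploited for years; in a web-facing application the same line of code is a data breach waiting to happen.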
The two sides of the same coin?
Are these irreconcilable differences? Not really, if you think of the impact on the business as the interest-that-matters on the technical debt, while the remediation cost is its principal.
What does “interest-that-matters” mean? It means “it depends,” of course. It depends on the value the application delivers to your organization. It depends on your application development and maintenance strategies. Context is key: the same principal amount of technical debt carries widely different interest in different contexts.
Why not use the same unit, that is, $ or €? First, the amounts could be too huge to be of any value to the business (outside a Monopoly board game). They are also too unpredictable: the amounts are application dependent and, even for a given application, the consequences are difficult to predict.
As for any other risk, this is more about giving a status: Is the risk level tolerable?
Many different statuses can be used:

Severe, high, elevated, guarded, or low
Unacceptable, poor, acceptable, good, or excellent
Very high / extreme, high, moderate, or low

These statuses convey the interpretation of the risk assessment. The output already takes into account the different aspects of risk: likelihood and consequences in context.
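One common way such statuses are produced is a likelihood-by-consequence grid. The scales, thresholds, and labels below are assumptions for illustration, not a standard; the point is that the same finding rates differently once context is factored in.

```python
# A minimal likelihood x consequence rating grid (invented scales).
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
CONSEQUENCE = {"minor": 1, "serious": 2, "severe": 3}

def risk_status(likelihood: str, consequence: str) -> str:
    """Map a likelihood/consequence pair to a qualitative status."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 6:
        return "high"
    if score >= 3:
        return "elevated"
    return "low"

# The same technical finding, rated in two different contexts:
print(risk_status("likely", "severe"))  # web-facing, data-sensitive app
print(risk_status("rare", "minor"))     # internal utility
```

Dollar amounts fall out of the picture entirely: the output is a status that answers the only question that matters here, namely whether the risk level is tolerable.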

Now what?
If you are convinced, as I am, of the complementary nature of remediation cost and risk level, you would nonetheless point out the major hurdle: objective risk level estimation.
Stay tuned for my next post, where we’ll look at this major hurdle to providing visibility into application quality.
How have you gotten visibility into your application’s quality? Share your story in a comment.

Getting SaaS-y about Technical Debt

I came across that old phrase, “Why buy the cow when you can get the milk for free?” the other day, in the context of marriage. Why should people marry when they can just live together? Well, you can imagine I came across a lot of opinions I won’t go into here.
An article by Naomi Bloom popped up that used the phrase in a technology context. She noted that vendors of traditional licensed, on-premise enterprise software had served themselves very well by forcing customers to buy the applications while also owning the data center and its operations, application management, upgrades, human resources, and more. This arrangement has provided traditional vendors considerable financial security and market power.
Clearly Defining the Cloud
Today’s multiple forms of cloud computing are changing all that, but we need to be careful of what passes for cloud computing, especially SaaS. Software marketers are rebranding all their products as SaaS, whether they really are or not, to take advantage of the latest ‘buzzword.’ Bloom notes that “true” SaaS must include four characteristics:

Software is made available to customers by the vendor on a subscription model;
The vendor hosts, operates, upgrades, manages and provides security for the software and all data;
The software architecture is multi-tenant, has a single code base and data structures, including metadata structures that are shared by all customers; and
The vendor pushes out new releases on a regular basis that are functionally rich and opt-in.

Keep in mind that software can meet all these attributes and be “true” SaaS, but still be badly written, unprofitable, outdated or problematic in other ways. However, when well-architected, well-written and well-managed, true SaaS can provide many benefits, including improved economics, faster time-to-market, more frequent and lower-cost upgrades, greater value-added and/or new offerings, and improved agility.

SaaS Doesn’t Eliminate Technical Debt
One quality even true SaaS shares with traditional on-premise software is technical debt. A further benefit of the SaaS model, not listed above, is continuous review of the software by multiple users, which can alert the vendor to issues in the code that impact performance.
There’s also a new generation of cloud-based portfolio risk analysis solutions that quantify the size and structural quality of applications, evaluate technical debt, and offer information and insights that support investment decision-making. These solutions can provide continuous monitoring of the SaaS solution along with risk and application analysis. The vendor can then implement a risk-based application portfolio management (APM) strategy faster, enabling better and safer decisions and portfolio modernization and transformation. Such a solution also profiles the structural quality risk, complexity, and size of any application to identify unexpected risks, and it quantifies technical debt to proactively keep software costs and risks from spiraling out of control.
If users think they are going to eliminate technical debt by moving to a SaaS model, their thinking is cloudy. But there are solutions to identify and help address technical debt for SaaS architectures that are just as robust as their on-premise counterparts.