Static code analysis is increasingly used to improve application software quality. Management and development teams put specific processes in place to scan the source code (automatically or not) and control the architecture of the applications they are in charge of. Multiple analyzers are deployed to parse the files involved in an application's implementation and configuration, and they generate results such as lists of violations, ranking indexes, quality grades, and health factors.
Based on the information presented in dedicated tools like dashboards or code viewers, managers and team leaders can then decide which problems must be solved and how the work should be done. At the end of the analysis chain, action plans can be defined and shared with development teams to fix the problems. This is the first aspect of static analysis regarding application software quality, but it must not be the only one.
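To make this pipeline concrete, here is a minimal, hypothetical sketch of what an analyzer does at its core: walk the source tree, check each file against a rule, and report violations. The rule, paths, and output format are illustrative assumptions, not the behavior of any particular product.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Hypothetical single-rule analyzer: flags empty catch blocks,
// a classic "swallowed exception" violation.
public class TinyAnalyzer {
    private static final Pattern EMPTY_CATCH =
            Pattern.compile("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}");

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(TinyAnalyzer::scan);
        }
    }

    private static void scan(Path file) {
        try {
            String source = Files.readString(file);
            if (EMPTY_CATCH.matcher(source).find()) {
                // A real analyzer would report line numbers, severity,
                // and remediation guidance; this just lists violations.
                System.out.println("VIOLATION empty-catch-block: " + file);
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```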
But what about the importance of understanding the problem?
The second aspect concerns the developer himself, as he is strongly impacted by quality improvement. Obviously he can use the results to fix any potential issues that have been identified, but he must understand the problem behind a violation and, above all, the remediation that must be applied to fix it.
Documentation, such as coding guidelines, must be available, and an analysis platform like CAST AIP can provide him with a clear description of the rules that are violated. The documentation explains the problem and the associated remediation, with meaningful examples that illustrate how the violation is identified and how a possible remediation can be implemented. With that, the developer will be more confident in the tool, will spend less time finding the exact piece of code that has been pinpointed, and will replace it with a better one faster.
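As an illustration of the kind of before/after example such documentation typically contains, here is a hedged sketch for a hypothetical "avoid empty catch blocks" rule; the class and method names are invented:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical rule documentation example: "Avoid empty catch blocks".
public class EmptyCatchExample {
    private static final Logger LOGGER = Logger.getLogger("orders");

    // Violation: the exception is silently swallowed, hiding failures.
    static void saveOrderBad(Runnable saveAction) {
        try {
            saveAction.run();
        } catch (RuntimeException e) {
        }
    }

    // Possible remediation: handle, or at least log and rethrow, so that
    // operations teams can diagnose problems in production.
    static void saveOrderGood(Runnable saveAction) {
        try {
            saveAction.run();
        } catch (RuntimeException e) {
            LOGGER.log(Level.SEVERE, "Failed to save order", e);
            throw e;
        }
    }
}
```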
However, this aspect also concerns team leaders and mid-level management. As I said above, they have to define action plans by taking into account the severity of the problems and the bandwidth they have for development activities. The documentation is also important here, since it helps management understand the problems, their impact on the business, and the cost to fix them. Defining action plans is often a challenge for people who are not very technical: how do you select which violations need to be fixed first with limited resources?
Involving the actors
This second aspect is very important because it improves the developer's knowledge of the technology and its impact on software quality, which leads to fewer violations. In some shops, I have met developers who were simply not aware of tricky programming-language behavior.
Developers wrote code with risky constructs without knowing the resulting behavior, and induced defects in the software. This is especially true with complex programming languages like C++ or framework-based environments like JEE. It is also true when developers are newcomers with basic training and little experience. Thus, documentation with a clear rationale, remediation guidance, and good examples that can easily be reused can be considered a complement to training. As I said in a previous post related to the developer's choice, it is never too late to improve our knowledge and to avoid making the same mistake several times. Moreover, it is very frustrating to receive a list of violations (sometimes accompanied by unpleasant remarks!) without knowing what exactly the problem is and how to fix it. You get a notification saying there is a violation here, and so what? Frustration and misunderstanding often prevent involvement.
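As one hypothetical example of such tricky behavior, consider Java's integer autoboxing, a classic pitfall that many static analyzers flag:

```java
// Demonstrates a classic Java pitfall: comparing boxed Integers with ==.
// Small values are cached by the JVM, so the same code behaves
// differently depending on the magnitude of the data.
public class AutoboxingPitfall {
    public static void main(String[] args) {
        Integer a = 100, b = 100;
        Integer c = 1000, d = 1000;

        System.out.println(a == b);      // true: both refer to cached objects
        System.out.println(c == d);      // false: distinct objects, reference compare
        System.out.println(c.equals(d)); // true: value comparison, the safe choice
    }
}
```

Code like this passes every test that happens to use small numbers, then fails in production with real data, exactly the kind of defect a documented rule can teach a developer to avoid for good.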
In addition, static code analysis can be an effective way to get developers more invested in software quality by producing better implementations. It is also a good idea to search for similar cases that developers are aware of but that the static analyzers may not have detected.
Finally, software quality improvement is not only a matter of tools searching for violations. It is also a matter of understanding and knowledge for all the actors who are involved in application development and maintenance. This is why the quality rule documentation, the coding guidelines, the architecture description, and easy access to these sources of information are very important and should be taken into consideration in software quality projects.
Modern integrated development environments (IDEs) are equipped with more and more tools to help developers code faster and better. Among these are plug-ins that allow developers to scan the source code for error-prone constructs, dangerous or deprecated statements, or practices that should be avoided. These plug-ins come in a variety of flavors, both free and commercial, but in all cases, developers can install them to improve the quality of the code they produce.
Some organizations encourage their developers to explore and deploy such tools, but as any good app developer knows, there is a difference between installing an app and using it consistently.
Installing a tool is one thing, using it is another
Even if an organization deploys an IDE with these plug-ins, the results really depend on how developers use their new tool. They can use it frequently and optimally; they can use it occasionally, when they have nothing else to do; they can never use it; or they can use it without taking its results into account.
Each developer will have his own reasons for not adopting a new tool, but these are a few that I have seen in my experience:
The management team forces him to use it without explaining why, or how it can help the development team by decreasing the number of errors. As a consequence, the tool may be rejected by a developer who is under pressure from the business and decides he cannot afford to lose time playing with a new toy.
He installs some tools just to try them out, but with no clear objective of improving the quality of the code he produces. The tool might be fun, but when the real work starts, it becomes less exciting.
He is not concerned with software quality and sees the delivered results as a source of additional work. In this case, he deactivates rules, either to reduce the number of violations or because he thinks they are not relevant.
The more you know
When a developer is convinced of the benefit, the analysis tool will be used regularly and the results will be studied and followed attentively. As a result, application quality will improve, the list of problems will shrink, and the quality of the work will become more refined.
Moreover, if the tool provides a developer with documentation explaining why a best practice should be respected, what problem lies behind a quality rule, and how to fix it, then it is going to enrich his knowledge of and competency in the technology he is working with. This will also contribute to decreasing the number of risky situations in the code base.
I’ve experienced this situation several times, and it is always surprising to hear a developer say he was not aware of some very specific or tricky behavior in a technology he had been using for a long time. It is never too late to learn!
Keep your team focused on system-level violations
Let me take a step back and say that an IDE is not a turnkey solution for perfect software quality. However, it does allow your developers to keep a closer eye on the quality of their code, which in turn decreases the number of violations detected when analyzing the whole application (and that number can be huge!). Your development team is then free to focus on architectural problems caused by components, layers, and other interacting technologies it might have missed during development. This allows your individual developers to contribute to your business objectives as best they can.
Choice for developers: be active or not
Despite such great tools, to err is human, and problems found in the code will still fall in the lap of developers. They have a simple choice to make: either actively contribute to the quality of the app, or hope any problems fall on the desk of other developers. One way this choice could be influenced is by properly introducing and explaining software quality to development teams.
The economy, the complexity and pace of business, and an ongoing lack of resources have created a perfect storm for IT departments worldwide. As wave after wave of IT failures litters the press, there’s no question that the storm is here. In its wake, businesses are faltering, careers are shattering, and stockholders are left wondering, “How could this happen … again?”
The key to preventing your business and career from landing on the rocks is the aggressive identification and elimination of risk. This document provides some tactics designed to identify risks across vast application portfolios and eliminate risk within critical business systems.
Red sky at morning, sailor take warning
With years of experience in exposing risks in IT systems, CAST provides a suite of offerings that yield the insight necessary to identify what can lead to high-profile production failures and cyberattacks. We also provide the remediation plans to eliminate the root cause of these issues.
Rapid Portfolio Analysis (RPA) creates transparency into vast application portfolios to identify risk. RPA derives measurements such as production-failure potential, software complexity, and maintainability. It also profiles portfolios to highlight short-term and long-term risks in critical systems. CAST’s Application Intelligence Platform (AIP) provides a robust, DNA-level analysis of individual enterprise systems, with specific guidance for eliminating business risk caused by structural and technical quality issues. It does this either on applications you know are in trouble, or by using the insight delivered by RPA to create a prioritized list of applications on which to unleash AIP.
Highlight Latent System Risks
Rapid Portfolio Analysis (RPA) creates technical and business risk profiles based on automated analysis and insight to support application portfolio analysis, portfolio rationalization, or technical assessments.
RPA analyzes source code against a set of engineering rules and principles to identify potential production defects, maintenance or modification issues, and excessive complexity. These are among the issues that contribute to potential failures and real business risk. RPA rounds out this assessment by generating software maintenance estimates for applications, as well as estimates of their technical debt.
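For illustration only (a generic industry-style formula, not necessarily how RPA computes it), a technical debt estimate can be derived by pricing out the remediation effort of outstanding violations:

$$\text{Technical Debt} \;\approx\; C \times \sum_{s \in \text{severities}} N_s \times h_s$$

where $N_s$ is the number of open violations at severity level $s$, $h_s$ is the average effort (in hours) to fix one violation of that severity, and $C$ is the loaded hourly cost of a developer. For example, 200 high-severity violations at 2 hours each, with $C = \$75$/hour, would contribute $200 \times 2 \times 75 = \$30{,}000$ of estimated debt.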
A Deep Dive into Critical Systems
Using highly sophisticated code analyzers and more than 1,000 rules based on engineering principles, AIP dives deep into an application from the largest modules down to individual methods, classes, and components. AIP analyzes and semantically understands source code, scripting, and interface languages across all layers of an application.
The resulting analysis identifies quality lapses in an application’s source code and provides precise guidance on how to fix the problems. Additionally, AIP validates architecture, ensures adherence to frameworks, and automates sizing measures such as function points. Doing so provides a robust view of the system’s size, complexity, and quality.
Measure the Business Impact of Quality
Through extensive research and industrial experience, CAST has identified five areas of structural software quality that most impact business risks and outcomes. Each of these areas can be assessed by measuring numerous attributes of the software, summarizing structural quality at a level that can be related to business value.
Navigate Risk with Actionable Insight
Identifying the root cause of system risk is only the first step. A CAST assessment with AIP not only identifies these issues, it provides the rationale behind each recorded violation and indicates which are most critical. It also creates action plans that guide technical teams through the remediation effort.
You Can’t Stay in the Harbor to Wait Out the Storm
As a leader, you need to ensure your team has the time and resources needed to root out and eliminate risks that can potentially damage the business. The key to effectively managing risks in an application portfolio is early detection of issues and the ability to quickly mitigate them.
With the detailed information provided by AIP in hand, application development executives and business leaders can map out and monitor aggressive remediation efforts that drive out system-level risk, resulting in more resilient and reliable applications.
Whether you need a macro-view of your portfolio risks or a micro-view of a specific application, CAST’s suite of assessment solutions can help. Contact your CAST representative now to learn how we can create the visibility needed to navigate through these troubled waters.
Applications are built on thousands, millions, maybe even tens of millions of lines of code. They are based on architectures that bring together technologies, frameworks, and databases, each set up with its own specific structure. If you have an action plan to improve your application on a specific issue, what will your strategy be? Do you select one quality-related problem, or take the opportunity to refactor part of your application? You know about issues reported by end users, but how do you address those buried inside the structure of your application?
I remember meeting with development teams and management who were trying to find the root cause of performance issues, as delays or malfunctions in the application would severely impact the business. The old application was a mix of new and old technologies, and the team was having difficulty integrating the new user interface with the rest of the application.
They debated for hours over which direction to take, and the pressure was high. The team responsible for the user interface said the old platform had to be rewritten; the rest of the team, of course, countered that their part of the application had worked well before the new interface appeared, and that the database structure was causing the performance issues. Management was totally lost and didn’t know what to decide! And while this was going on, we could sense the business value decreasing.
We decided to analyze the entire application with CAST, using the transaction-ranking method to identify performance issues. Transactions are ranked by summing the high risks attached to each one, across the many method and procedure calls involved.
In the CAST dashboard, there is a view dedicated to listing transactions from the end-user point of view. The transactions that appeared at the top of the list were precisely those with real performance issues. From there, it becomes easy to select the most critical defects inside one specific transaction and add them to your action plan.
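To make the ranking idea concrete, here is a minimal sketch under the assumption that each end-user transaction carries the violations found along its call path. The names, weights, and data are invented for illustration; this is not CAST’s implementation.

```java
import java.util.*;

// Illustrative transaction risk ranking: score each end-user transaction
// by summing severity weights of the violations found along its call path,
// then list the riskiest transactions first.
public class TransactionRanking {

    record Violation(String rule, int severityWeight) {}

    record Transaction(String name, List<Violation> violationsOnPath) {
        int riskIndex() {
            return violationsOnPath.stream()
                    .mapToInt(Violation::severityWeight)
                    .sum();
        }
    }

    public static void main(String[] args) {
        List<Transaction> transactions = List.of(
            new Transaction("SubmitOrder", List.of(
                new Violation("loop-with-sql-inside", 9),
                new Violation("missing-table-index", 8))),
            new Transaction("ViewCatalog", List.of(
                new Violation("unbuffered-io", 4))));

        // Highest risk first: these are the candidates for the action plan.
        transactions.stream()
            .sorted(Comparator.comparingInt(Transaction::riskIndex).reversed())
            .forEach(t -> System.out.println(t.riskIndex() + "  " + t.name()));
    }
}
```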
These results, coming from a solution like CAST, were factual and not debatable. They highlighted the fact that the defects correlated with the performance issues were a combination of bad programming practices originating in different parts of the application.
We decided to work only on the highest-risk transaction, so we could measure the improvement in production performance. In the end, all the teams worked together, because the root causes were interlinked: bad looping practices, an improperly managed homemade framework data layer, and a huge amount of data in tables without proper indexes.
This is just one successful way to build an effective action plan. What’s your experience? Do you have a success story about how you built an effective action plan? Be sure to share in a comment.
Risk detection is the most valid justification for the software analysis and measurement activity: identifying any threat that can severely and negatively impact the behavior of applications in operations, as well as application maintenance and development activity.
“Most valid justification” sounds great, but it is also quite difficult to manage. Few organizations keep track of software issues that originate in the source code and architecture, so it is difficult to define objective target requirements that could support a “zero defects” approach. Without clear requirements, it is all too easy to invest time and resources in the wrong place: removing too few or too many non-compliant situations in the source code and architecture, or fixing the wrong part of the application.
One answer is to benchmark analysis and measurement results so as to build a predictive model: “this application is likely to be fine in operations for this kind of business, because all these similar applications show the same results.”
On the one hand, benchmarking by nature requires comparing apples with apples and oranges with oranges. In other words, measurements need to be applicable to all benchmarked applications, and stable over time, to yield a fair and valid benchmarking outcome.
On the other hand, risk detection for any given project:
benefits from the use of state-of-the-art “weapons”, i.e., any means of identifying serious threats, kept up to date every day (as with an antivirus signature list)
should not care about fair comparison; it is never a good excuse to say that the trading application failed but showed better-than-average results
should heed contextual information about the application to better identify threats (an acquaintance of mine, a security guru, once told me there are two types of software metrics: generic metrics and useful ones), i.e., information that cannot be found automatically in the source code and architecture but that would turn a non-compliant situation into a major threat. For instance: In which part of the application is the violation located? How much data is stored in the accessed database tables in production, not only in the development and testing environments? What is the functional purpose of the transaction? Which component is the officially vetted input validation component? (A sketch of how such context can be applied follows this list.)
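As a hedged sketch of that last point, here is how contextual data might escalate a finding. The context fields, thresholds, and table names are assumptions made purely for illustration:

```java
import java.util.Map;

// Illustrative escalation of a static finding using context that the
// analyzer cannot see in the code: production data volumes per table.
public class ContextualTriage {

    // Hypothetical: rows per table, measured in production.
    static final Map<String, Long> PROD_ROW_COUNTS = Map.of(
        "ORDERS", 120_000_000L,
        "COUNTRY_CODES", 250L);

    // A "full table scan" finding is a minor note on a tiny reference
    // table, but a major threat on a huge production table.
    static String triageFullScan(String table) {
        long rows = PROD_ROW_COUNTS.getOrDefault(table, 0L);
        return rows > 1_000_000 ? "MAJOR THREAT" : "minor";
    }

    public static void main(String[] args) {
        System.out.println("ORDERS scan: " + triageFullScan("ORDERS"));
        System.out.println("COUNTRY_CODES scan: " + triageFullScan("COUNTRY_CODES"));
    }
}
```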
Is this grounds for a divorce on account of irreconcilable differences?
Are we bound to keep the activities apart with a state-of-the-art risk detection system and a common-denominator benchmarking capability?
That would be a huge mistake, as management and project teams would use different indicators and draw different conclusions. Worst-case scenario: project teams identify a major threat they need resources to fix, but the management indicators say the opposite, so management denies the request.
Although not so simple, there are steps that can be taken to bridge the gap.
The goal would be to make sure:
that “contextual information” collection is part of the analysis and measurement process
that a lack of such information is made visible (using the officially vetted input validation component example: not knowing which component it is should be flagged as an issue that impacts the results, not used as an excuse for poor results, which is much too often the case)
that the quality of the collected information is also assessed through human auditing
Are your risk detection and benchmarking efforts butting heads? Let us know in a comment. And keep your eyes on the blog for my next post about the benefits of a well-designed assessment model.
Any advocate for better software quality knows that one of the biggest challenges is helping the CIO reach the CFO. When your team needs a budget for an important project, those conversations often break down. Thanks to the unavoidable technical complexity of IT, oftentimes the CIO might as well be speaking Esperanto to the CFO.
When it comes to budgeting, IT might be the least-understood department in your organization. And what the CFO doesn’t understand, he doesn’t budget for. Instead, capital that should rightfully go towards IT growth and innovation is allocated to other groups and initiatives. That dulls the organization’s competitive edge, and can have a toxic effect on system quality overall.
This is why I advocate software estimation as a budget-winning process for IT leaders. It clearly correlates software quality and technical debt in ways that a CFO or CEO can understand. “Technical debt” is a useful term that helps people outside of IT understand that application risk can be measured, and that it has a cost that gets paid one way or another.
The difficult part comes where the rubber meets the road. Your CIO has intimate knowledge of the inner workings of your IT department; you just need to equip him with the proper metrics to interface with the CFO.
Rather than getting technical, the CIO must decode what the IT teams do and translate it into the language of planning and budgeting, with a focus on being responsive, adding new capabilities, and reducing maintenance costs and risk per head. This is one place where our technology can help, with metrics like:
Software maintenance effort over time. This metric tracks the estimated software maintenance effort of your most critical applications, broken down by fiscal quarter. It gives you an at-a-glance view of which applications require the most upkeep, and which are actually becoming more efficient.
Change in risk and size over the last four quarters. This report shows how many applications increased or reduced their risk to the organization, as well as which applications grew or shrank in size. It is a great way to tell whether bloated applications are becoming a risk to your structural quality.
Estimated vs. planned maintenance effort. This metric compares the planned maintenance effort per application against the estimated effort. Application size, the number and type of technologies, complexity, and quality are all drivers of the estimated maintenance effort.
Top 10 applications by high-risk technical debt. This might be the most telling metric to bring to your CFO. It shows the proportion of technical debt in your application portfolio that is driven by dangerous coding patterns and should be addressed first to minimize business-risk exposure.
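For illustration only, here is a minimal sketch of how such a ranking might be assembled. The applications, debt figures, and the notion of a “high-risk share” are invented for the example, not CAST’s actual report logic:

```java
import java.util.*;

// Illustrative "top applications by high-risk technical debt" report:
// debt is only counted here when it stems from high-severity violations.
public class DebtReport {

    record App(String name, double totalDebtHours, double highRiskShare) {
        double highRiskDebtHours() { return totalDebtHours * highRiskShare; }
    }

    public static void main(String[] args) {
        List<App> portfolio = List.of(
            new App("Billing", 4_000, 0.35),
            new App("CRM", 9_500, 0.10),
            new App("Trading", 2_200, 0.60));

        // Rank by the debt that actually threatens the business.
        portfolio.stream()
            .sorted(Comparator.comparingDouble(App::highRiskDebtHours).reversed())
            .limit(10)
            .forEach(a -> System.out.printf("%-8s %,8.0f high-risk debt hours%n",
                    a.name(), a.highRiskDebtHours()));
    }
}
```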
With all the dimensional views an organization can get through our Application Intelligence Platform and Highlight reports, it can distill high-bandwidth conversations down to a place where finance and IT intersect. Armed with those key KPIs, your CIO will have rock-solid metrics, in the CFO’s language, that can foster a dialogue both sides understand.
Happy Independence Day, everybody! I only hope those of you reading this on your Android device have not turned it sideways or performed some other seemingly innocuous action that has made the application fail.
I say this because I recently read yet another blog about “workarounds” to compensate for application failures inherent in Android devices. These pieces have become almost ubiquitous over the past 18 months, to the point where one would think Google would just go back and perform the structural quality analysis needed to address the issues.
Their failure to do so reminds me, on this day before Independence Day, of the opening lines of Thomas Paine’s “The American Crisis”:
These are the times that try men’s souls: The summer soldier and the sunshine patriot will, in this crisis, shrink from the service of his country; but he that stands by it now, deserves the love and thanks of man and woman.
As Google continues to “shrink” from its responsibility to provide application software that is of sound structural quality, they are certainly “trying men’s [and women’s] souls.”
I Have Not Yet Begun to Fight!
I continue to be amazed that Google appears more interested in what to call its next Android OS. As “enamored” (can you feel the sarcasm dripping from that word?) as I was last year with “Ice Cream,” I am even more captivated by the latest one, Jelly Bean. I am betting the name really fits the product: it looks solid on the outside, but if Android’s history is any indication, it will most certainly be a piece of gelatinous mush on the inside.
Maybe Google continues to fall into the trap of believing its own press clippings (the positive ones, at least), because it seems more concerned with marketing than with software quality. Google’s mobile operating systems continue to feature one flaw after another, and these flaws are not “discovered” until after the system has been rolled out and installed by consumers. Nor are they just minor flaws that inconvenience the user, like the ones mentioned in the workarounds blog to which I referred above. They include battery-draining and security flaws that cost time and money for those using the devices.
Nevertheless, they continue to build one iteration after another atop mobile platforms they know to have flaws – or at least by now they should know – and they continue to fail to fix them.
We Find these Truths to be Self-Evident
It’s truly a shame Google won’t use the same methodology as Thomas Jefferson and the Continental Congress did in forging one of the world’s greatest documents, the Declaration of Independence. From the time Jefferson began working on the Declaration on June 11, 1776, he wrote and rewrote, edited and re-edited versions of the document for almost three weeks, until he arrived at what he felt was a product of optimal quality. And yet even after the document, with all those versions and all those edits, was presented to the Continental Congress on June 28, 1776, for a vote, those men debated for another five days over its contents and made another 33 changes to it!
Obviously, there was no marketing department pushing Congress to get the final product out. The Declaration of Independence, a document that truly did have urgency behind it, as men were fighting and dying for the values it espoused, was edited and changed dozens of times before it was delivered. If that was possible then, why can’t the marketing people at Google allow their developers to perform a bit of automated analysis and measurement on Android software before they declare its “independence” from internal production? Were they to do this (harkening back to Paine), they would “deserve the love and thanks of man and woman.”
These truly are the times that try our souls.