About Jerome Chiampi

Jerome Chiampi has been a Product Manager at CAST since 2006, previously in charge of SAP and DB2 support, now in charge of COBOL and C++ support, and responsible for analysis customization tools. Prior to CAST, he was Product Manager at SoftMaint, in charge of the ESSOR software analysis platform, and a consultant in charge of system migration tools. Jerome has a master's degree in "Automatique industrielle et humaine" from L.A.I.H. University of Valenciennes, France.

To each task its tool

Measuring application quality to get useful results requires proper analysis of the right source code perimeter in the most relevant way. But it doesn't stop with just one measurement: you can follow the evolution of indicators over a given period to anticipate potential trouble and be in a position to make good decisions.

Keep an eye on legacy apps, COBOL’s not dead!

Third-generation programming languages (3GL) like COBOL or PL/1 are seen as outdated languages for "has-been" developers and no longer interest new ones (there were even predictions that COBOL would die in the mid-term). These new developers prefer more modern technologies, like J2EE or .NET, and, worryingly, educational organizations provide few learning opportunities for 3GLs.

False positives in SAM — Achilles’ heel or Samson’s hair?

False positives are unavoidable and appear in every software application measurement system, to a greater or lesser degree. There are several causes for this situation.

First, the more we search for information, the higher the risk of false positives.
Second, the more complex the information is to search, the higher the risk of errors.
And third, the less sophisticated the technique used to scan the code, the higher the risk of having bad results.

In this last case, the techniques commonly used vary from a simple grep search to syntax-based parsing, semantic resolution, and dataflow analysis.
However, the situation can be seen from two opposite points of view: a negative one, which considers false positives the Achilles' heel of SAM; and a more positive one, which, like Samson's hair, considers false positives valuable information.
The Achilles’ heel
The false positives generated when analyzing software applications impact measurement results, making risk evaluation increasingly difficult. We cannot be sure the violations we are looking at are true or false, even if we have an idea of the results for a given rule in advance. It is not possible to know where false positives can be, and this is annoying for people who have to work with the results.
Their occurrence depends on the measurement itself, but also on the nature of the application that is measured. I have experienced this situation multiple times since I started working on SAM in 1990, and it is very irritating to have doubt surrounding your results. I often prefer to check and double-check a large number of cases, to be sure. But is it possible to be really sure? I think it's not. We can only refine our analysis techniques more and more.
Usually, I have an idea of the number of violations a quality rule should generate, and if results are too numerous, then I'm pretty sure there are false positives. The problem here is that false positives lead to wasted time checking results and decreased confidence in the measurement system. Moreover, when the analysis engine is improved and the false positives are removed, it is possible that users continue to see those violations as false and discard them without paying attention.
False positives also distort application benchmarking. When doing this type of exercise, it is a good thing to take the error rate into account. Indeed, the comparison can become inconsistent if the number of false positives is too high. In this case, how do you know if the results position an application correctly compared to others?
Applications with different characteristics can generate different numbers of false positives. If an application has been implemented using a programming construct that generates false positives, then the total number of false violations for this application will be abnormally high compared to other applications. In the end, the comparison will be biased.
Samson’s hair
On the other hand, let's be optimistic! Some measures are more complex than others and require significant effort, and having false positives in the results is better than having no result at all. SAM systems have their strengths as well as their weaknesses!
Indeed, even if the measure is not perfect, it allows you to know whether the situation is rather good or rather dramatic, and which components are impacted. Moreover, we are at least aware that the measure is not so easy to take and the results must be interpreted cautiously.
If the number of violations is too high, then the number of false positives can also be high, disseminated in long lists, meaning that searching for them becomes harder. As a consequence, the uncertainty in the result value compared to reality becomes significant and must be taken into account when working on them.

But, if the number of results is not too high, then false positives have a limited impact on the user's work. Indeed, in this case, they are easily visible among the list of violations and the user can quickly identify and filter them to evaluate the risk incurred by the application that has been assessed. Moreover, even if a false violation is taken into account, it generally does not change the results and conclusion much.
And, to remain constructive, a false positive also means that what was searched for is not so easy to find or can have several facets. This can be used to justify future investment to improve the SAM system!

Use static analysis tools to increase developers’ knowledge

Static code analysis is used more and more frequently to improve application software quality. Management and development teams put specific processes in place to scan the source code (automatically or not) and control the architecture of the applications they are in charge of. Multiple analyzers are deployed to parse the files that are involved in application implementation and configuration, and they generate results like lists of violations, ranking indexes, quality grades, and health factors.

Based on the information that is presented in dedicated tools like dashboards or code viewers, managers and team leaders can then decide which problems must be solved and the way the work has to be done. At the end of the analysis chain, action plans can be defined and shared with development teams to fix the problems. This is the first aspect of static analysis regarding application software quality, but this must not be the only one.
But what about the importance of understanding the problem?
The second aspect concerns the developer himself, as he is strongly impacted by quality improvement. Obviously he can use the results to fix any potential issues that have been identified, but he must understand the problem behind a violation and, above all, the remediation that must be applied to fix it.
Documentation, like coding guidelines, must be available, and an analysis platform, like CAST AIP for instance, can provide him with a clear description of the rules that are violated. The documentation presents explanations about the problem and the associated remediation, with meaningful examples to illustrate how the violation is identified and how a possible remediation can be implemented. With that, the developer will be more confident regarding the tool, and he will spend less time finding the exact piece of code that has been pinpointed and replacing it with a better one.
However, this aspect also concerns team leaders and mid-management. As I said above, they have to define action plans by taking into account the severity of problems and the bandwidth they have regarding development activities. The documentation is also important here, since it is going to help management understand the problems, their impact on the business, and the cost to fix them. Defining action plans is often a challenge for people who are not very technical. How do you select which violations need to be fixed first with limited resources?
Involving the actors
This second aspect is very important because it contributes to improving the developer's knowledge of the technology and its impact on software quality, which leads to fewer violations. I have seen developers in some shops who were not aware of tricky programming language behavior.
Developers wrote code with risky constructs without knowing the resulting behavior and induced defects in the software. This is especially true with complex programming languages, like C++, or framework-based environments like JEE. It's also true when developers are newcomers with basic training and little experience. Thus, documentation with clear rationale, remediation guidance, and good examples that can be reused easily can be considered a complement to training.
As I said in a previous post related to the developer’s choice, it is never too late to improve our knowledge and to avoid making the same mistake several times. Moreover, it is very frustrating to receive a list of violations (sometimes coming with unpleasant remarks!) without knowing what exactly the problem is and how to fix it. You get a notification saying there is a violation here, and so what? Frustration and misunderstanding often prevent involvement.
In addition, static code analysis could be an interesting way to get developers more involved in software quality by producing better implementations. It's also a good idea to search for similar cases that developers are aware of but that may not have been detected by the static analyzers.
Finally, software quality improvement is not only a matter of tools searching for violations. It is also a matter of understanding and knowledge for all the actors who are involved in application development and maintenance. This is why the quality rule documentation, the coding guidelines, the architecture description, and easy access to these sources of information are very important and should be taken into consideration in software quality projects.

Does an IDE improve software quality?

Modern integrated development environments (IDEs) are equipped with more and more tools to help developers code faster and better. Among these are plug-ins that allow developers to scan the source code for error-prone constructs, dangerous or deprecated statements, or practices that should be avoided. IDEs come in a variety of flavors — both free and commercial — but in all cases, developers can install them to improve the quality of the code they produce.
Some organizations encourage their developers to explore and deploy such tools, but as any good app developer knows, there is a difference between installing an app and using it consistently.
Installing a tool is one thing, using it is another
Even if an organization deploys an IDE, the results really depend on how the developers utilize their new tool. They can use it frequently and optimally; they can use it sometimes, when they don’t have anything else to do; they can never use it; or they can use it without taking results into account.
Each developer will have his own reasons for not adopting a new tool, but these are a few that I have seen in my experience:

The management team forces him to use it without explaining why and how it can improve the development team's work by decreasing the number of errors. As a consequence, the tool may be rejected by the developer, who is under pressure from the business and decides he does not need to lose time playing with a new toy.
He installs some tools, just to try them out, but with no clear objective to improve the quality of the code he produces. The tool might be fun, but when the real work starts, it’s less exciting.
He is not concerned by software quality and considers results that are delivered as a source of additional work. In this case, he deactivates rules, either to reduce the number of violations or because he thinks they are not relevant.

The more you know
When a developer is convinced of the benefit, the analysis tool will be used regularly and the results will be studied and followed attentively. As a result, the application quality will improve, the list of problems will shrink, and the work itself will become more refined.
Moreover, if the tool provides a developer with documentation explaining why a best practice should be respected, what problem exists behind a quality rule, and how to fix it, then it is going to enrich his knowledge and competency regarding the technology he is working with. This aspect will also contribute to decrease the number of risky situations in the code base.
I’ve experienced this situation several times, but it’s always surprising to hear a developer say he was not aware of some very specific or tricky behavior available in the technology he’d been using for a long time. It is never too late to learn!
Keep your team focused on system-level violations
Let me take a step back and say that an IDE is not a turn-key solution for perfect software quality. However, it does allow your developers to keep a closer eye on the quality of their code, which in turn decreases the number of violations detected when analyzing the whole application (and the number can be huge!). Now your development team is free to focus on architectural problems caused by components, layers, and other interacting technologies it might have missed during development. This will allow your individual developers to contribute to your business objectives as best they can.
Choice for developers: be active or not
Despite such great tools, to err is human, and problems found in the code will still fall in the lap of developers. They have a simple choice to make: either actively contribute to the quality of the app, or hope any problems fall on the desk of other developers. One way this choice could be influenced is by properly introducing and explaining software quality to development teams.

Don’t Underestimate the Impact of Data Handling

For enterprise IT applications, it's all about processing data, defined through multiple types and handled in large volumes of code. The number of lines of code devoted to data handling is thus high enough to harbor a large number of software bugs waiting for specific events to damage the IT system and impact the business.
Even if we can say that a bug is a bug and it will be fixed when it occurs, bugs related to data handling should not be underestimated, for several reasons:

Such bugs are generally not easy to detect among the millions of lines of code that constitute an application. They can consist of a single statement manipulating data defined elsewhere, so the specifics of its use are not immediately visible. They can also result from the execution of a given control flow associated with a given data flow.
Some of them can be there for a long time and will never occur. The problem is identifying which ones belong to this category so you can focus on the others.
They can be activated by the conjunction of specific conditions that are not easy to identify.
When the issue occurs, the impact on business data can be severe: applications can stop, data can be corrupted, and end-user and customer satisfaction can decrease.
Consequences are not always clearly visible and, in this case, few users detect them.

Problems are distributed
Issues can be hidden everywhere in application code. Risk management methodologies can help select the most sensitive application areas and reduce the scope of the search. However, in most cases, detecting such potential issues requires the ability to check the types and structure of the data flowing from one component to another, or from one layer to another, as well as the algorithms implemented in your programming language of choice. This spells trouble for everyone.
Why does a bug activate suddenly?
There are different factors that contribute to activating a bug:

Probability increases with the number of lines of code.
The more a component is executed, the more its bugs can be activated.
The more you modify the code, the more likely an unexpected behavior can occur.
Tight coupling between data and processing logic means any change to the data impacts the code.
Market pressure stresses the development team, and working quickly is often a good way to create new bugs and activate existing ones!
Algorithms implementing business rules can be complex and distributed over multiple components, fostering the occurrence of bugs.
Evolutions in functional data are not always propagated throughout the application implementation and can make code that was working well run erratically.

The biggest challenge comes when several factors occur at the same time – a difficult challenge for any development team!
Various situations
The list of situations that can lead to troubles related to data handling is not short. For instance, database access can be made fragile when:

Database tables are modified by several components. Data modifications should usually be channeled through specific update, insert, and delete routines: a dedicated API or data layer that is fully tested to maintain data integrity.
Host variable sizes are not correctly defined compared to database fields. Some queries can return a volume of data that is higher than expected. Or a change is made in the database structure and has not been propagated to the rest of the application.

When manipulating variables and parameters, potential issues can be:

Type mismatches are generally insidious cases. For example, they can occur with implicit conversions between two compatible pieces of data, such as the ones found in different SQL dialects, injecting incorrect values into the database. Similar situations can also be found in COBOL programs when alphanumeric fields are moved into numeric fields, leading to abnormal terminations if the target variable is used in calculation or simply in a computational format. Improper casts between C++ class pointers (e.g., from a base class to a child class) can lead to lost data and to data corruption propagated through I/O.
Data truncation occurs when variable sizes are not checked when moving one value into another. Part of the value can be lost if the target variable is too small to carry the information.
Inconsistencies between the arguments sent by a caller and the parameters expected by a function or program. This can occur when a change to the function or program interface has not been propagated to all callers, making them terminate or corrupt data.

What about consequences?
Unfortunately, there is more than one type of consequence when such bugs activate. One of the big risks for the application is related to the corruption of the data it is manipulating — the worst case being when corruption is spreading throughout the IT system. Generally, this impacts users and the business.
I remember such situations with a banking application. Everything was working fine when the phone rang: “Hi, the numbers on my weekly reports don’t look right. I checked but it seems there is a problem. Can you check on your side?”
Well, we searched for the programs that generated the report, but we did not find any interesting information. We checked their inputs and found incorrect values. Then we looked at the program that produced these inputs, and finally we found the cause of the problem in a third program — a group of variables that were not correctly assigned.
Fortunately, the problem was detected and fixed. A more critical situation happens when very small corruptions creep silently and insidiously into the IT system. They are too small or too dispersed to be noticed. For instance, some decimal values that are improperly truncated or rounded can seem like a small issue, but in the end, the total can be significant!
Another consequence is related to application behavior. Bad development practices can rapidly lead an application to erratic behavior and sometimes termination. Finally, some issues, like buffer overruns, can even lead to security vulnerabilities if the data is exposed to end users, especially in web applications.
Manual search …
Issues related to data handling are rarely discovered and anticipated when they are sought through manual and isolated operations only. The volume of code to look at, the number of data structures to check, the complex business rules to take into account, and the bugs' subtlety (which sometimes seems diabolical!) are serious obstacles for developers, who cannot spend too much time trying to fix problems that might never occur.
… or automated system-level analysis?
The most efficient way to detect these types of issues is to analyze the whole application software with tools like CAST AIP, to correlate findings concerning data structures with code logic. Such tools can establish who calls whom in the code and can introspect the components interacting in the data flow. Thus, issue detection can be carried out faster, helping developers secure the code. It can be automated to regularly check the applications without disturbing the development team's activities, allowing them to manage prevention at a lower cost.