Finding the right tools for the right challenge
The growing cost of most software development efforts can be traced back to one underlying cause: a lack of visibility into the software. As the size and complexity of business-critical systems grow — along with the complexity of sourcing environments — app owners, architects, and developers increasingly need to truly understand their codebases. Without visibility into the implementation, it is hard for a developer to grasp all the nuances of the code. This explains the disproportionate amount of time developers need to identify the root cause of defects.
Most organizations have started to realize that code quality is a root cause of many of their issues, whether it’s incident levels or time to value. The growing complexity of IT development environments — outsourcing, the required velocity, the introduction of Agile — has raised the issue of code quality, sometimes to the executive level.
Business applications have always been complex. You can go back to the 70s, even the 60s, and hear about systems that have millions of lines of code. But here’s the rub: In those days it was millions of lines of COBOL or some other language. But it was all one language. All one system. All one single application in a nice, neat, tidy package.
We’ve made it a point on our blog to highlight the fact that software glitches in important IT systems — like NatWest and Google Drive — can no longer be “the cost of doing business” in this day and age. Interestingly, we’re starting to see another concerning trend: more and more crashes blamed on faulty hardware or network problems, while the software itself is ignored. Curiously, incident rates can differ by more than a factor of ten between applications with similar functional characteristics. Is it possible that the robustness of the software inside those applications has something to do with apparent hardware failures? I can picture a frustrated data center operator reading this and nodding vigorously.
The perimeter surrounding enterprise applications has expanded dramatically since the birth of mobile and cloud, and IT security professionals are looking in all the wrong places for a fix. Traditionally, organizations secured their data behind a walled-off perimeter — like the walls of a medieval castle — which contained multiple layers to help mitigate the risk of data compromise or exposure. The advent of mobile has altered that landscape dramatically, essentially opening the front door of the castle and allowing that data to escape into unknown territory — the mobile device.
I’ll be presenting a webinar on this subject, Managing Security Risks with the Rise of Mobile and Cloud, on Feb. 28 at 11:00am EST, but I wanted to answer some questions you might have here on the CAST blog beforehand.
What new challenges exist in mobile application development?
The new paradigm of application development moves away from focusing our efforts on building an internally protected web application. Development now focuses on using the various mobile SDKs that exist to put web apps on mobile devices.
The problem is that many of those SDKs are made by “mom and pop shops” and do little to address the challenges of securing a mobile device. As a result, we’re seeing “old vulnerabilities,” ones that were discovered and remediated as part of more traditional development methodologies, starting to resurface.
Is the industry doing anything about it?
From a standards perspective, the industry hasn’t outlined the proper way for organizations to engineer and develop mobile applications. There are no formal methodologies or processes around Android or iOS that the industry can grab hold of to help bring this challenge back to a more manageable state. It’s almost like what we saw in the ’90s with the Dot-com boom, but now with mobile SDKs.
Is this a new trend? When did you hear about it?
I first caught wind of this trend towards the end of last year. I found out a client’s International division was using a third-party SDK to create its mobile business applications. The problem was that the SDK just wrapped a mobile aspect around a normal web app that then fed the data back to the client through the third party’s servers. That was scary on a lot of different levels.
What can organizations do to be prepared?
There are strategies that organizations can use to help determine what the overall risk of a particular application is before it goes to market. During the webinar, I will discuss a real-world case study with an organization that instituted an assurance program, and how that helped mitigate and control the risk that applications presented to their business.
With the advent of mobile devices, an organization’s attack surface has expanded enormously. IT leaders cannot put their trust in the mobile devices themselves to protect against the potential compromise of their data. Ultimately, they’re losing control of how those applications interact with their devices and, more importantly, how the data is communicated back to the organization.
The time has come for IT leaders to begin instituting stringent security controls and processes around how their mobile applications are being developed and secured. The traditional “castle” defense can no longer adequately protect an organization against the new threats facing it in the mobile and cloud landscapes.
For a more in-depth look into how to protect your organization, tune into our webinar, Managing Security Risks with the Rise of Mobile and Cloud, on Feb 28 at 11:00am EST. Even if you can’t make that time, you can still register for access to the recording when it is available.
These days, it doesn’t matter where I go or which media channel I watch, I hear about the same thing: cost reduction. From governments to households to companies, budgets are on a diet — saving is the new sacred word. Therefore, everyone must do more with much less. When it comes to companies, the first budget to shrink is usually the IT budget. But what can be cut, and how?
Sometimes, entire projects are stopped. With luck, maybe only features will be abandoned.
Sadly, lowering the cost to develop or maintain a project doesn’t lower customers’ expectations. In fact, those expectations increase as time passes while customers’ needs stay the same: they want better, faster, and more complete applications. And it goes without saying that the application has to be rock solid, because a disappointed customer can quickly turn into an ex-customer.
But the “fat” has to be cut somewhere. Some would cut the budget on the lower elements of an application — the ones that the customers never see. But this is wrong. You see, an application is like a house. Skimping on the foundation creates an enormous risk of everything else collapsing.
So if you can’t save money on the foundation, you have to remove features. But, what is a house without rooms? Doesn’t removing features also remove the meaning of the project? As you can see, it is difficult to lower a project’s budget without putting it into jeopardy. So what is the miraculous solution? Unfortunately there’s no such thing.
A good start though would be minimizing the cost of heavy maintenance. This can be done by putting a lot of thought into what architecture would be the most efficient from the beginning.
That’s easier said than done! Firstly, defining what architecture is needed is difficult enough. Secondly, it is also difficult to respect the architecture when it comes time to code. Sometimes circumventing the architecture lets you deliver a feature faster, but it defeats the purpose of the architecture and introduces flaws into the software. To make sure that doesn’t happen, you need a tool that checks whether the delivered code respects the architecture model.
This tool is delivered by CAST and is called Architecture Checker.
Using CAST Architecture Checker brings many benefits. With it, you can validate the architecture of your application from the moment it is designed, at whatever level of detail you choose.
You can also verify, through a generated set of quality rules, how well the defined architecture is respected in the code whenever the code is modified. Every time developers add a new feature, you can make sure the architecture is respected, keeping your applications robust and easily maintainable. And you will keep your costs low.
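To make the idea concrete, an architecture rule can be expressed as a constraint on dependencies between layers and checked mechanically against the code. The sketch below is purely illustrative (it does not reflect how CAST Architecture Checker works internally), and the layer names and forbidden-import rule are invented for the example:

```python
import ast

# Hypothetical layering rule: the "ui" layer may import from "services",
# but never directly from "data" (the persistence layer).
FORBIDDEN = {"ui": {"data"}}

def layer_of(module_name):
    # Assume the first package segment names the layer, e.g. "ui.checkout" -> "ui".
    return module_name.split(".")[0]

def check_imports(source, module_name):
    """Return a list of (line, message) architecture violations for one module."""
    violations = []
    layer = layer_of(module_name)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for target in targets:
            if layer_of(target) in FORBIDDEN.get(layer, set()):
                violations.append(
                    (node.lineno, f"{module_name} ({layer}) must not import {target}")
                )
    return violations

code = "from data.orders import OrderTable\nimport services.billing\n"
print(check_imports(code, "ui.checkout"))
# -> [(1, 'ui.checkout (ui) must not import data.orders')]
```

Run as part of the build, a check like this turns the architecture model into something enforceable on every commit, rather than a diagram that drifts out of date.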
In the end, an application is like anything else: if you ensure that it stays intact during its creation, it won’t require costly repairs down the road.
Static code analysis is used more and more frequently to improve application software quality. Management and development teams put specific processes in place to scan the source code (automatically or not) and control the architecture of the applications they are in charge of. Multiple analyzers are deployed to parse the files that are involved in application implementation and configuration, and they generate results like lists of violations, ranking indexes, quality grades, and health factors.
Based on the information that is presented in dedicated tools like dashboards or code viewers, managers and team leaders can then decide which problems must be solved and the way the work has to be done. At the end of the analysis chain, action plans can be defined and shared with development teams to fix the problems. This is the first aspect of static analysis regarding application software quality, but this must not be the only one.
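As a minimal illustration of what such an analyzer does (a toy sketch, not how a commercial analysis platform is implemented), a few lines of Python can parse a source file and emit a violation list for one well-known risky construct, the bare `except:` clause:

```python
import ast

def find_bare_excepts(source):
    """Flag 'except:' clauses with no exception type.

    A bare except silently swallows everything, including
    KeyboardInterrupt and SystemExit, which hides real failures.
    """
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            violations.append((node.lineno, "bare 'except:' clause"))
    return violations

sample = (
    "try:\n"
    "    risky()\n"
    "except:\n"
    "    pass\n"
)
print(find_bare_excepts(sample))
# -> [(3, "bare 'except:' clause")]
```

Real analyzers apply hundreds of such rules across many languages and aggregate the results into the indexes and grades mentioned above, but each individual finding boils down to this: a location plus a named violation.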
But what about the importance of understanding the problem?
The second aspect concerns the developer himself, as he is strongly impacted by quality improvement. Obviously he can use the results to fix any potential issues that have been identified, but he must understand the problem behind a violation and, above all, the remediation that must be applied to fix it.
Documentation, like coding guidelines, must be available, and an analysis platform, like CAST AIP for instance, can provide the developer with a clear description of the rules that are violated. The documentation explains the problem and the associated remediation, with meaningful examples illustrating how the violation is identified and how a remediation can be implemented. With that, the developer will be more confident in the tool, will spend less time finding the exact piece of code that has been pinpointed, and can replace it with a better one more quickly.
However, this aspect also concerns team leaders and mid-management. As I said above, they have to define action plans by taking into account the severity of problems and the bandwidth they have regarding development activities. The documentation is also important here, since it is going to help management understand the problems, their impact on the business, and the cost to fix them. Defining action plans is often a challenge for people who are not very technical. How do you select which violations need to be fixed first with limited resources?
Involving the actors
This second aspect is very important because it improves the developer’s knowledge of the technology and its impact on software quality, which leads to fewer violations. I have seen developers in some shops who were not aware of tricky programming-language behavior.
Developers wrote code with risky constructs without knowing the resulting behavior, and so introduced defects into the software. This is especially true with complex programming languages like C++, or framework-based environments like JEE. It is also true when developers are newcomers with only basic training and little experience. Documentation with a clear rationale, a remediation, and good, easily reusable examples can thus be considered a complement to training.
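As a concrete illustration (in Python, for brevity) of the kind of tricky behavior a good rule description should explain, consider the classic mutable-default-argument pitfall; the fixed version is exactly the sort of remediation example that documentation should provide alongside the rationale:

```python
def append_item(item, bucket=[]):  # risky: the default list is created only once
    bucket.append(item)
    return bucket

print(append_item("a"))  # ['a']
print(append_item("b"))  # ['a', 'b'] -- surprising state shared across calls

# The remediation a good rule description would show:
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []  # a fresh list on every call
    bucket.append(item)
    return bucket

print(append_item_fixed("a"))  # ['a']
print(append_item_fixed("b"))  # ['b']
```

A developer who only receives “violation on line 1” learns nothing; one who reads why the default is shared, and sees the fixed version, will not write the construct again.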
As I said in a previous post related to the developer’s choice, it is never too late to improve our knowledge and to avoid making the same mistake several times. Moreover, it is very frustrating to receive a list of violations (sometimes coming with unpleasant remarks!) without knowing what exactly the problem is and how to fix it. You get a notification saying there is a violation here, and so what? Frustration and misunderstanding often prevent involvement.
In addition, static code analysis can be an interesting way to get developers more invested in software quality by producing better implementations. It is also a good idea to search for similar cases that developers know about but that the static analyzers may not have detected.
Finally, software quality improvement is not only a matter of tools searching for violations. It is also a matter of understanding and knowledge for all the actors who are involved in application development and maintenance. This is why the quality rule documentation, the coding guidelines, the architecture description, and easy access to these sources of information are very important and should be taken into consideration in software quality projects.
Modern integrated development environments (IDEs) come equipped with more and more tools to help developers code faster and better. Among these are plug-ins that scan the source code for error-prone constructs, dangerous or deprecated statements, or practices that should be avoided. IDEs come in a variety of flavors — both free and commercial — but in all cases, developers can install them to improve the quality of the code they produce.
Some organizations encourage their developers to explore and deploy such tools, but as any good app developer knows, there is a difference between installing an app and using it consistently.
Installing a tool is one thing, using it is another
Even if an organization deploys an IDE, the results really depend on how the developers utilize their new tool. They can use it frequently and optimally; they can use it sometimes, when they don’t have anything else to do; they can never use it; or they can use it without taking results into account.
Each developer will have his own reasons for not adopting a new tool, but these are a few that I have seen in my experience:
The management team forces him to use it without explaining why, or how it can help the development team by decreasing the number of errors. As a consequence, the tool may be rejected by a developer who is under pressure from the business and decides he cannot afford to waste time playing with a new toy.
He installs some tools, just to try them out, but with no clear objective to improve the quality of the code he produces. The tool might be fun, but when the real work starts, it’s less exciting.
He is not concerned with software quality and considers the delivered results a source of additional work. In this case, he deactivates rules, either to reduce the number of violations or because he thinks they are not relevant.
The more you know
When a developer is convinced of the benefit, the analysis tool will be used regularly and the results studied and followed attentively. As a result, application quality will improve, the list of problems will shrink, and the work itself will become more polished.
Moreover, if the tool provides documentation explaining why a best practice should be respected, what problem lies behind a quality rule, and how to fix it, it will enrich the developer’s knowledge of and competency in the technology he is working with. This will also contribute to decreasing the number of risky situations in the code base.
I’ve experienced this situation several times, but it’s always surprising to hear a developer say he was not aware of some very specific or tricky behavior available in the technology he’d been using for a long time. It is never too late to learn!
Keep your team focused on system-level violations
Let me take a step back and say that an IDE is not a turn-key solution for perfect software quality. However, it does allow your developers to keep a closer eye on the quality of their code, which in turn decreases the number of violations detected when analyzing the whole application (and the number can be huge!). Now your development team is free to focus on architectural problems caused by components, layers, and other interacting technologies it might have missed during development. This will allow your individual developers to contribute to your business objectives as best they can.
Choice for developers: be active or not
Despite such great tools, to err is human, and problems found in the code will still fall in the lap of developers. They have a simple choice to make: either actively contribute to the quality of the app, or hope any problems fall on the desk of other developers. One way this choice could be influenced is by properly introducing and explaining software quality to development teams.