Last Wednesday we hosted a webinar featuring Steve Naylor of Solvay.
During the webinar, Mr. Naylor illustrated how gaining visibility into critical SAP systems has helped Solvay maintain uninterrupted services and reduce development costs.
Mr. Naylor also shared how software analysis and measurement:
Provides insight into legacy ABAP systems
Identifies security issues not detected by native SAP code analysis tools
Reveals hidden software complexity that can cause performance and stability issues
The sequestration has hit a lot of organizations hard, and IT-intensive programs aren’t ducking the proverbial bullet. In the decade since 9/11, organizations had more money and resources to give development teams to fix their applications’ performance issues. But now that the nation is trying to fix its fiscal woes, every day and dollar counts.
Applications are built on thousands, millions, maybe even tens of millions of lines of code. They bring together technologies, frameworks, and databases, each set up with its own specific architecture. If you have to build an action plan to improve your application on a specific issue, what will your strategy be?
Do you tackle one quality problem, or take the opportunity to refactor part of your application? You know about the issues end users report, but how do you trace them to their causes inside the structure of your application?
I remember meeting with development teams and managers who were trying to find the root cause of performance issues, because delays and malfunctions in the application would severely impact the business. The application was a mix of new and old technologies, and the team was struggling to integrate the new user interface with the rest of it.
They debated for hours over which direction to take, and the pressure was high. The team responsible for the user interface argued that the old platform had to be rewritten; the rest of the team, of course, countered that their part of the application had worked well before the new interface appeared, and that it was the database structure causing the performance issues. Management was completely lost and didn’t know what to decide. And while all this was going on, we could sense the business value decreasing.
We decided to analyze the entire application with CAST, using its transaction-ranking method to identify performance issues. Transactions are ranked by the sum of the high risks attached to each one, along with the number of method and procedure calls they traverse.
The CAST Dashboard includes a view dedicated to listing transactions from the end-user point of view. The transactions that appeared at the top of the list were precisely the ones with real performance issues. From there, it becomes easy to select the most critical defects inside a specific transaction and add them to your action plan.
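As a toy sketch of the ranking idea only (not CAST’s actual algorithm; the data model, weights, and tie-breaking rule here are invented for illustration), transactions can be ordered by the sum of the high-risk weights attached to them, with the number of calls along the transaction path as a tie-breaker:

```python
# Toy illustration of ranking transactions by aggregated risk.
# Field names and scoring are invented; CAST's real transaction-ranking
# method is more sophisticated than this sketch.

def rank_transactions(transactions):
    """Sort transactions by the sum of their high-risk defect weights,
    breaking ties by the number of calls along the transaction path."""
    return sorted(
        transactions,
        key=lambda t: (sum(t["high_risk_weights"]), t["call_count"]),
        reverse=True,
    )

sample = [
    {"name": "checkout",  "high_risk_weights": [9, 8, 7], "call_count": 120},
    {"name": "login",     "high_risk_weights": [3],       "call_count": 15},
    {"name": "reporting", "high_risk_weights": [9, 9, 9], "call_count": 300},
]

for t in rank_transactions(sample):
    print(t["name"], sum(t["high_risk_weights"]))
```

The transaction at the top of such a list is the natural first entry in an action plan, since fixing it concentrates effort where the aggregated risk is highest.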
These results, coming from a solution like CAST, were factual and not debatable. They highlighted the fact that the defects correlated with the performance issues were a combination of bad programming practices spread across different parts of the application.
We decided to work only on the transaction with the highest risk, so we could measure the improvement in production performance. In the end, all the teams worked together, because the root causes were a combination of bad loop practices, an improperly managed homemade framework data layer, and huge volumes of data in tables without proper indexes.
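To make the “bad loop practices” and missing-index findings concrete, here is a minimal sketch of the classic anti-pattern, with invented table names and sqlite3 standing in for the real database: a per-row query inside a loop against an unindexed table, next to the set-based, indexed alternative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, f"cust{i}") for i in range(1000)])
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 1000) for i in range(5000)])

def names_slow():
    """Bad practice: one query per row inside a loop (an "N+1" pattern),
    hitting a table with no index on the lookup column."""
    names = []
    for (cust_id,) in cur.execute("SELECT customer_id FROM orders").fetchall():
        row = cur.execute("SELECT name FROM customers WHERE id = ?",
                          (cust_id,)).fetchone()
        names.append(row[0])
    return names

# Fix: an index on the lookup column, and a single set-based join
# instead of 5,000 separate queries.
cur.execute("CREATE INDEX idx_customers_id ON customers(id)")

def names_fast():
    return [name for (name,) in cur.execute(
        "SELECT c.name FROM orders o JOIN customers c ON c.id = o.customer_id")]

assert sorted(names_slow()) == sorted(names_fast())
```

Either version returns the right answer, which is exactly why this kind of defect survives functional testing and only surfaces under production data volumes.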
This is just one successful way to build an effective action plan. What’s your experience? Do you have a success story about how you built an effective action plan? Be sure to share in a comment.
We all know testing is an essential step in the application development process. But sometimes testing can feel like your team is just throwing bricks against a wall and seeing when the wall breaks. Wouldn’t it make more sense to be measuring the integrity of the wall itself before chucking things at it?
Consider load testing, where you synthesize a bunch of virtual users and throw them at the application. You’re looking to see how well the application deals with elasticity and scalability demands. If your team is doing load testing without first testing the structural integrity of the application, however, they’re putting the cart before the horse.
Before animating zillions of synthetic users, let’s first examine how the application interacts with one user, with itself, and with other systems in the ecosystem. How is that user’s data being transferred around the application? Is it getting stuck in a coding loop that could lead to problems down the line?
Next, what about security? A key part of structural integrity is application integrity, which revolves around the security and performance of the application source code. Security testing might focus too much on input validation and not enough on solid architectural design and proper control of access to confidential data.
Architecture: This is often the most important piece of a custom application. A study published by Addison-Wesley Professional found that over 50 percent of security issues are the result of poor architectural design. That said, I’ve seen outmoded applications that still have a pretty good multi-tier, secure architecture. Give those guys a pat on the back! Even though the application overall is outmoded, the ability to leverage a good security layer in a multi-tier architecture, where every tier does its own validation and is independent of the others, is a crucial advantage. Using CAST’s analysis tools, you can determine the architectural quality, security risk, and adherence to the organization’s standards, and measure improvements over time.
Data access: After the proper architecture is in place, the team needs to ensure data moves around smoothly and only goes, or rests, where it needs to and nowhere else. Using CAST’s analysis tools, for example, the development team can map every place where the application interacts with the organization’s data storage, such as a database or a persistence layer. Any place where the application touches the data store in an unexpected or unsanctioned way can be highlighted. Often, CAST finds that the application is accessing data from too many places. An application’s user interface layer, for example, should never access the database directly; it should always go through a dedicated data access layer. And yet I see this error all the time. Or suppose you have a customer table with 20 different routines inserting, updating, or deleting data; that’s a problem too! The application should have a single component (or routine) that interacts with the customer table, and all other routines should use it, centralizing the system’s data actions. Unless you can visualize the structural integrity of the application, however, you’ll never know whether the team is adhering to that best practice.
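A minimal sketch of that single-component idea, with invented names and sqlite3 standing in for a real data store (this is the general repository pattern, not CAST-specific code):

```python
import sqlite3

class CustomerRepository:
    """The one component through which all customer-table access flows.
    Table schema and method names are invented for illustration."""

    def __init__(self, conn):
        self._conn = conn

    def add(self, cust_id, name):
        self._conn.execute("INSERT INTO customers VALUES (?, ?)",
                           (cust_id, name))

    def rename(self, cust_id, name):
        self._conn.execute("UPDATE customers SET name = ? WHERE id = ?",
                           (name, cust_id))

    def get(self, cust_id):
        return self._conn.execute(
            "SELECT id, name FROM customers WHERE id = ?",
            (cust_id,)).fetchone()

# Other routines (billing, reporting, the UI's service layer) call the
# repository instead of issuing their own INSERT/UPDATE/DELETE statements.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
repo = CustomerRepository(conn)
repo.add(1, "Acme")
repo.rename(1, "Acme Corp")
print(repo.get(1))  # -> (1, 'Acme Corp')
```

With every read and write funneled through one class, a structural analysis that finds SQL for the customer table anywhere else is, by definition, flagging a violation of the design.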
These types of issues might seem minor. But left undiagnosed, they can lead to a poorly performing application that taxes system resources and drives up maintenance and other costs. Moreover, load testing done in the later phases of the application development process, before launching an update or lighting up a migration (say, from an internal data center to the cloud), won’t surface any of these issues unless they happen to fail under load.
It will just tell you that the system doesn’t scale at some targeted level, and then it’s up to the team to go figure out why and fix it. If your team tests the structural integrity of the application before the load testing phase, latent performance, architectural, security, and other issues will become visible before the first synthetic user is even generated.