Modernize QA with Automated Structural Quality Gates

Just as insects can become resistant to a pesticide, a new strain of software bug has emerged that plagues developers and wreaks havoc on software quality: architecturally complex violations. Unlike code-level bugs, these system-level defects involve interactions between several components, often spread across different layers of an application, which makes them much more difficult to find and fix.
And even though these violations account for only 10 percent of total defects, they lead to 90 percent of production issues, severely impacting software quality and technical debt.

Executive IT Decision Making: Am I Properly Measuring IT Risk?

Business decisions today are moving away from gut feeling and toward data-driven, objective decision making. But what can IT do to prepare? For IT to gain true visibility into application development and make scientific decisions, it needs a view of the end product itself: its stability, robustness, performance, security, and development velocity.
Developers are equipped with tools to help them “spell check” their own pieces of the code, but they’re not looking at the larger picture: the architectural and structural vulnerabilities that remain hidden in complex, modern-day, multi-tiered frameworks. Imagine trying to understand a great novel’s plot and structure by reading pages ripped at random from the book.
Our very own Konstantin Berger breaks down how an IT organization, equipped with an application portfolio analyzer like CAST’s AIP, can reduce its IT risk, produce higher quality applications, and help the business budget better based on application health, maintainability, and lifecycle. Hear what he has to say in the video below.

Executives, Management, and Testers: Are You Aligned?

What draws me to Anaheim, Calif., in October is not the walking Disney characters (though there are plenty of those), but STARWEST, the West Coast’s largest conference on software testing, analysis, and review.
After a day listening to James Bach teach critical thinking for testers, I wake up extra early on Tuesday to attend the opening keynote — Michael D. Kelly and Jeanette Thebeau discussing “What Executives Value in Testing.”
I know Michael well; he is a former president of the Association for Software Testing who worked as a test manager at Liberty Mutual, a Fortune 500 company, then went on to start his own company, which is doing well. His partner, Jeanette, is a bit of a wild card; her background is in working with executives to shape the business message.
With this pair, I’m not quite sure what to expect.

Extending Agile To The Left

At a time when other conferences are splitting into smaller and smaller regional and micro-tech events, the Agile Conference, with its 1,700 attendees, stands alone.
Alone and overwhelming. The event had sixteen tracks, spanning everything from DevOps, coaching and mentoring, leadership, and lean startup to classic elements like development, testing, and quality assurance.
Not to mention the vendor booths, the Stalwarts Stage (where experts “just” answered questions for 75 minutes), the four-day boot camp for beginners, and the academic track. The 215 sessions brought one word to mind: overwhelming.
Instead of focusing on one track or concept, I spent my time at the conference looking for themes and patterns. What surprised me was where I found those ideas: to the left, in product design, and to the right, in DevOps, not in the middle, in classic software development.

Maintaining Software Quality on the Bleeding Edge

From IT’s perspective, the business is always asking for new applications: apps to innovate, or simply to make jobs a little easier. The problem is, the business always wants them delivered quickly and running perfectly at launch.
Our very own Dr. Bill Curtis sat down with ComputerWeekly to discuss the challenges of maintaining this pace of development while simultaneously ensuring an application’s performance and resiliency. Hop over to ComputerWeekly to read the full article.

Raymond James’ Aha! Moment with Integrating Software Quality

When my organization decided to hire a new CTO, one of his top priorities was to look through our old support contracts and “cut the fat,” as it were. It was there, among the rubble, that we found a transformational tool we had cast aside, one that could help us increase our development productivity and software quality. In learning more about this tool, we found that it hadn’t failed us; rather, we had failed it!
So my brand-new boss gave me a brand-new ultimatum: Integrate this tool into our software development lifecycle, or we’re dumping it.
The tool was CAST’s Application Intelligence Platform (AIP), used to increase an application’s structural quality during development. Previous teams had struggled with it because they thought of it as a plug-and-play solution that would fix all their problems, rather than as an integral part of the development lifecycle. And when it didn’t work, all too often the tool itself got the blame. We knew we had to change development’s perception of software quality to get this project off the shelf.
We started scanning applications with CAST, and immediately hit a roadblock. The reports came back with a lot of violations, and many developers started to panic, thinking, “the sky is falling!” We had to take a step back and explain that tackling every violation would be costly and ineffective; only critical violations were going to swing the quality needle. After those violations were fixed and the applications rescanned, quality scores improved by 30 percent. This was our Aha! moment.
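To make that triage concrete, here is a minimal sketch in Python. The data shapes are invented for illustration and are not CAST’s report format; the idea is simply to drop everything below critical severity and rank what remains by how often it occurs.

```python
# Hypothetical triage sketch: keep only critical violations and rank
# them so the fixes that actually move the quality score come first.
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str          # e.g. "Avoid SQL injection" (made-up rule names)
    severity: str      # "critical", "high", "medium", or "low"
    occurrences: int   # how many places in the code trip this rule

def triage(violations: list[Violation]) -> list[Violation]:
    critical = [v for v in violations if v.severity == "critical"]
    # Most frequent critical rules first: fixing one rule that fires in
    # many places tends to move the score more than scattered one-offs.
    return sorted(critical, key=lambda v: v.occurrences, reverse=True)

report = [
    Violation("Avoid SQL injection", "critical", 14),
    Violation("Unused variable", "low", 212),
    Violation("Close resources in finally block", "critical", 39),
]
for v in triage(report):
    print(f"{v.occurrences:>3}x  {v.rule}")
```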
But even with the improved scores, we found our quality processes were still not being integrated into the development lifecycle. So we decided to stop making it optional. We asked senior management for a list of our most critical business applications, and we started approaching teams like police officers with a search warrant. With a swift knock on the door, we could say, “Your application has been deemed critical by our senior leadership, and we’re here to check the code.”
With this approach, we got no objections at all. Teams made time because they understood that their app was critical and that we needed to capture specific quality metrics, such as complexity and transferability, as well as how well our developers coded not only to our own standards but to those of the rest of the industry. What’s more, after an application had been analyzed once, the team could never release new code of lower quality than the previous version. This made it easy to implement a quality gate: one that was objective rather than anecdotal, triggered by the new violations that appeared in the code.
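As a rough illustration of that gate, the logic can fit in a script like the sketch below, run at the end of a build to compare the latest scan against the previous release’s baseline. The JSON fields and file layout are invented for the example; they are not CAST AIP’s actual report format.

```python
# Hedged sketch of a "never worse than the last release" quality gate.
# The scan-result fields here are illustrative stand-ins.
import json
import sys

def gate(baseline_path: str, current_path: str) -> int:
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(current_path) as f:
        current = json.load(f)

    # Rule 1: the structural quality score may never regress.
    if current["quality_score"] < baseline["quality_score"]:
        print(f"FAIL: score dropped {baseline['quality_score']} -> "
              f"{current['quality_score']}")
        return 1

    # Rule 2: no brand-new critical violations in this release.
    known = {v["id"] for v in baseline["critical_violations"]}
    new = [v for v in current["critical_violations"] if v["id"] not in known]
    if new:
        print(f"FAIL: {len(new)} new critical violation(s) introduced")
        return 1

    print("PASS: quality gate cleared")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1], sys.argv[2]))
```

A nonzero exit code lets the build pipeline block any release that would be structurally worse than the last one, which is exactly what makes the gate objective rather than anecdotal.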
We learned a lot in our first year, scanning 12 applications as we refined our process. Development teams began to see how these scans could help them produce more efficient apps. Today, we’ve made significant progress across our entire application portfolio, having increased the footprint of scans to 58 apps. In just over a year, CAST has become an integral part of our software development lifecycle.
Looking forward, our next steps are to automate the process as part of a continuous delivery effort to increase deployment speed. We also plan to partner with software engineering managers to objectively determine who our best coders are across the board, and where we need to focus our coaching.
Lots of companies like to talk about software quality, but not many live it, breathe it, and love it. If you really want software quality assurance to work in your organization, it needs to be institutionalized across your entire development process. That’s the power of CAST AIP: the more we worked with it, the more difficult it became to develop quality software without it.

How to Build the Best Action Plan for your Application

Applications are built on thousands, millions, maybe even tens of millions of lines of code. They gather technologies, frameworks, and databases, each set up with its own specific architecture. If you need an action plan to improve your application around a specific issue, what will your strategy be?
Do you select one quality problem, or take the opportunity to refactor part of your application? You know about the issues reported by end users, but how do you address them inside the structure of your application?
I remember meeting with development teams and managers who were trying to find the root cause of performance issues, as delays and malfunctions in the application were severely impacting the business. The application was a mix of new and old technologies, and the team had difficulty integrating the new user interface with the rest of it.
They debated for hours over which direction to take, and the pressure was high. The team responsible for the user interface said the old platform had to be rewritten; the rest of the team, of course, countered that their part of the application had worked well before the new interface appeared, and that it was the database structure causing the performance issues. Management was totally lost and didn’t know what to decide! And while this went on, we could sense the business value decreasing.
We decided to analyze the entire application with CAST, using the transaction ranking method to identify performance issues. Each transaction is ranked by the sum of the high-risk violations attached to it, across all of the methods and procedure calls it traverses.
In the CAST Dashboard, there is a view dedicated to listing transactions from the end-user point of view. The transactions at the top of the list were precisely those with real performance issues. From there, it became easy to select the most critical defects inside one specific transaction and add them to the action plan.
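To show the mechanics of that ranking, here is a minimal sketch in Python. The component names, call paths, and risk weights are all invented for illustration; this is the ranking idea, not CAST’s implementation: sum the high-risk violations along each transaction’s call path and sort.

```python
# Sketch of transaction ranking: a transaction's risk is the sum of the
# high-risk violation weights found in every method or procedure it calls.
from collections import defaultdict

# component -> weights of the high-risk violations found in it (made up)
violations = {
    "OrderController.submit": [3, 5],
    "OrderDao.save":          [8],   # e.g. a query executed inside a loop
    "ReportView.render":      [],
}

# transaction -> the end-to-end call path it exercises (made up)
transactions = {
    "Place order":    ["OrderController.submit", "OrderDao.save"],
    "Monthly report": ["ReportView.render"],
}

scores: defaultdict[str, int] = defaultdict(int)
for name, path in transactions.items():
    for component in path:
        scores[name] += sum(violations.get(component, []))

# Highest-risk transactions first: the top of this list is where
# the action plan should start.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>3}  {name}")
```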
These results, coming from a solution like CAST, were factual and not debatable. They highlighted the fact that the defects correlated with the performance issues were a combination of bad programming practices coming from different parts of the application.
We decided to work only on the highest-risk transaction, so we could measure the performance improvement in production. In the end, all the teams worked together, because the root causes were linked: bad looping practices, an improperly managed homemade data-layer framework, and huge amounts of data in tables without proper indexes.
This is just one way to build an effective action plan. What’s your experience? Do you have a success story of your own? Be sure to share it in a comment.