Krusty the Clown might know more than your development team does about software testing

Anybody reading this post probably thinks they know all they need to know about component-based development. But in my experience, many organizations fall short when it comes to completely testing their components. There is one crucial aspect of component-based development that, if neglected, is potentially damaging to your career and to your company. But before I go there, let’s rehash why development teams love component-based development.
Component-based development is a rules-based approach to defining, implementing, and composing independent components into a software system. What do we get out of this? Well, we get higher reuse, because once they’re built, many of those components can be reused again and again in different applications.
We’re essentially creating a box of building blocks that can be reused, so each future app has less to build anew. It also makes development much faster, because instead of having to reinvent the wheel, we simply reuse the wheel. We only add the pieces that are new and specific to the current application’s requirements.
But here’s the rub: In this software engineering discipline, there doesn’t seem to be as much of an understanding of component-based development from a testing perspective. Sure, people think they are testing their apps. But in fact, in many cases they’re testing the individual components and not the app itself.
The deeper your team gets into the componentization of your applications, the more they need to be taking what I would call a systematic approach to understanding the behaviors of the application, and therefore the performance of the application — performance meaning elasticity, scalability, and speed among other important factors.
Today, many organizations are just throwing applications over the wall to testers and having them beat on components at a granular or higher level, usually manually or with scripting. But this shifts the focus to just the percentage of the application tested, as opposed to taking a holistic approach to testing the full application and all of its interactions with components and with other systems (databases, servers, interfaces, and so on) that the application depends and capitalizes on. I call this approach generic testing, because in the end, companies spend tremendous amounts of money without getting maximum ROI back from their testing efforts.
Worse, organizations assume that once their suite of test cases passes, they are risk-free. That’s because many organizations haven’t matured from an overall testing standpoint and don’t understand everything they need to look for. Many, in fact, think that because they’re using CBD, they’re improving quality by definition.
And as the landscape of their apps continues to change and grow more complex, the need for really robust (robust meaning a systematic, not generic, approach) testing becomes more apparent. So even if an individual component is perfect, you still need tests that show all of these components working together. Do they show whether somebody is, for example, copying and pasting the code from one component into another instead of properly using interfaces?
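Here’s a rough sketch of what that copy-and-paste problem looks like in practice. The class and method names are hypothetical, not anyone’s real code; the point is the same business rule pasted into two components versus pulled behind a shared interface.

```java
// Hypothetical illustration: the same discount rule copy-pasted into two components.
// Component-level tests on each class pass; only a view across the whole
// application reveals the duplication.
class InvoiceService {
    double applyDiscount(double total) {
        return total > 1000 ? total * 0.95 : total;   // rule copied here...
    }
}

class QuoteService {
    double applyDiscount(double total) {
        return total > 1000 ? total * 0.95 : total;   // ...and pasted here
    }
}

// Preferred: both components depend on one shared interface,
// so the rule lives in exactly one place.
interface DiscountPolicy {
    double apply(double total);
}

class StandardDiscountPolicy implements DiscountPolicy {
    @Override
    public double apply(double total) {
        return total > 1000 ? total * 0.95 : total;
    }
}
```

Unit tests on each component would pass either way. Only a view across the whole application reveals that the rule now lives in two places and can drift out of sync.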
Today, a large percentage of business technology consumers are thinking along exactly those lines, assuming that component-level quality adds up to application-level quality, and it can lead to significant problems. If an organization throws everything up into the cloud and doesn’t really understand what it’s throwing up there, it’s going to wake up with the same problems.
So what can your development team do to avoid these issues? How does your team ensure the optimal structural quality of your finished apps? Let’s start with what you shouldn’t do. Don’t stop doing all the testing you’re doing. It all feeds into the end result. But you’ve got to add what’s missing: the capability to take a systematic end-to-end X-ray of the apps’ structural integrity.
This is not the end-all, be-all; it is simply a crucial missing piece in almost all application development today. We believe CAST is a vendor with strong tools to solve this problem. But this post is not about products. This is a post to get you to realize that you can work smarter, not harder, at identifying bugs, security issues, reliability issues, capacity issues, privacy issues, and more by closing the gap.
This doesn’t mean that you have to instrument every API and create more work for yourself. The tools should be doing this automatically for you. Just as your load testing tool automatically synthesizes users, visualizing your system’s integrity should be automated as well.
As Krusty the Clown from The Simpsons said, “It’s not just good, it’s good enough.” Organizations need to pass the Krusty test. Their testing solution might be good, but is it good enough to ensure your applications will be high quality and function as expected? Most don’t know, because they haven’t been taking a systematic, holistic approach to evaluating and testing their components and the applications they create.
 

Don’t Wait For Load Testing to Find Performance Issues

We all know testing is an essential step in the application development process. But sometimes testing can feel like your team is just throwing bricks against a wall and seeing when the wall breaks. Wouldn’t it make more sense to be measuring the integrity of the wall itself before chucking things at it?
Consider load testing, where you synthesize a bunch of virtual users and throw them at the application. You’re looking to see how well the application deals with the elasticity and scalability demands. If your team is doing load testing without first testing the structural integrity of the application, however, they’re putting the cart before the horse.
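For context, here is a minimal sketch of what a load-testing tool is doing under the hood: spinning up a pool of “virtual users” and firing them at one endpoint. The endpoint URL and user count are placeholders, and a real tool (JMeter, Gatling, and friends) adds ramp-up, think time, and reporting on top of this idea.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of load testing: N concurrent "virtual users" hitting one endpoint.
public class TinyLoadTest {
    public static void main(String[] args) throws Exception {
        int virtualUsers = 50;                                    // hypothetical load level
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/checkout"))  // placeholder endpoint
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        for (int i = 0; i < virtualUsers; i++) {
            pool.submit(() -> {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("status=" + response.statusCode());
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
    }
}
```

All this tells you is when the wall breaks. It says nothing about why.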
Before animating zillions of synthetic users, let’s first examine how the application interacts with one user, with itself, and with other systems in the ecosystem. How is that user’s data being transferred around the application? Is it getting stuck in a coding loop that could lead to problems down the line?
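Here’s a hypothetical example of the kind of coding loop I mean, one that a structural analysis can flag with a single user, long before any load test. The repository and entity names are made up for illustration.

```java
import java.util.List;

// Sketch of a per-user data-flow problem: a database lookup made inside a loop,
// one round trip per item, instead of one set-based query.
class OrderHistoryService {
    private final OrderRepository orders;

    OrderHistoryService(OrderRepository orders) {
        this.orders = orders;
    }

    // Problematic: one database call per order id held by the user.
    List<Order> loadOneByOne(List<Long> orderIds) {
        return orderIds.stream()
                .map(orders::findById)        // N round trips for N orders
                .toList();
    }

    // Preferred: a single query for all of the user's orders.
    List<Order> loadInOneQuery(List<Long> orderIds) {
        return orders.findAllById(orderIds);  // one round trip
    }
}

// Placeholder types so the sketch is self-contained.
record Order(long id, double total) {}

interface OrderRepository {
    Order findById(long id);
    List<Order> findAllById(List<Long> ids);
}
```

With one user and ten orders, nobody notices. With thousands of users, the one-call-per-item version is the component that falls over first.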
Next, what about security? A key part of structural integrity is application integrity, which revolves around the security and performance of the application source code. Security testing might focus too much on input validation and not enough on solid architectural design and proper control of access to confidential data.
Architecture: This is often the most important piece of a custom application. A study published by Addison-Wesley Professional found over 50 percent of security issues are the result of poor architectural design. That said, I’ve seen outmoded applications that still have a pretty good multi-tier, secure architecture. Give those guys a pat on the back! Even though the application overall is outmoded, the ability to leverage a good security layer in a multi-tier architecture — where every tier does its own validation and is independent of the other — is a crucial advantage. Using CAST’s analysis tools, you can determine the architectural quality, security risk, and adherence to the organization’s standards, and measure improvements to it.
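To make “every tier does its own validation” concrete, here is a small, hypothetical sketch: the controller tier checks the input it receives, and the service tier re-checks instead of trusting its caller. None of these class names come from a real system.

```java
// Sketch of tier-independent validation in a multi-tier architecture.
class TransferController {
    private final TransferService service;

    TransferController(TransferService service) {
        this.service = service;
    }

    void handleTransfer(String accountId, double amount) {
        // Presentation tier validates what it receives from the outside world.
        if (accountId == null || accountId.isBlank()) {
            throw new IllegalArgumentException("accountId is required");
        }
        service.transfer(accountId, amount);
    }
}

class TransferService {
    void transfer(String accountId, double amount) {
        // The service tier does not assume the controller validated anything.
        if (accountId == null || accountId.isBlank()) {
            throw new IllegalArgumentException("accountId is required");
        }
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        // ... hand off to the data access tier ...
    }
}
```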
Data access: After the proper architecture is in place, the team needs to ensure data can move around smoothly, and only go or rest where it needs to, and nowhere else. Using CAST’s analysis tools, for example, the development team can map all the places where the application interacts with the organization’s data storage, such as a database or a persistence layer. Any place where the application interacts with the data store in an unexpected or otherwise out-of-bounds way can be highlighted. Often, CAST finds that the application is reading and writing data from too many places. For example, an application’s user interface layer should never access the database directly; it should always go through a dedicated data access layer. And yet I see this error all the time. Now suppose you have a customer table with 20 different routines inserting, updating, or deleting data; well, that’s also a problem. The application should have a single component (or routine) that interacts with the customer table, and all other routines should use it to centralize the system’s data actions. Unless you can visualize the structural integrity of the application, however, you’ll never know whether the team is adhering to that best design practice.
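Here’s a minimal sketch of that design practice, with hypothetical names: exactly one component owns the SQL that touches the customer table, and everything else, including the UI layer, goes through it.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// One component owns every statement that touches the customer table.
class CustomerRepository {
    private final DataSource dataSource;

    CustomerRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    void updateEmail(long customerId, String email) {
        String sql = "UPDATE customer SET email = ? WHERE id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, email);
            stmt.setLong(2, customerId);
            stmt.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("customer update failed", e);
        }
    }
}

// Callers anywhere in the application reuse the repository; the UI layer
// never opens its own connection or sees SQL at all.
class CustomerProfileScreen {
    private final CustomerRepository customers;

    CustomerProfileScreen(CustomerRepository customers) {
        this.customers = customers;
    }

    void onSaveClicked(long customerId, String newEmail) {
        customers.updateEmail(customerId, newEmail);  // delegate, don't query
    }
}
```

The point isn’t this particular repository pattern; it’s that a structural view of the application is the only way to confirm that no other routine is quietly writing to that table on its own.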
These types of issues might seem minor. But left undiagnosed, they can lead to a poorly performing application that taxes system resources and drives up maintenance and other costs. Moreover, load testing done in the later phases of the application development process, before launching an update, or before lighting up a migration (e.g., internal data center to the cloud), won’t find any of these issues unless they happen to break under load.
It will just tell you that the system doesn’t scale at some targeted level, and then it’s up to the team to go figure out why and fix it. If your team tests the structural integrity of the application before the load testing phase, latent performance, architectural, security, and other issues will become visible before the first synthetic user is even generated.
 

Load Testing Fails Facebook IPO for NASDAQ

Are you load testing? Great. Are you counting on load testing to protect your organization from a catastrophic system failure? Well, that’s not so great.
I bet if you surveyed enterprise application development teams around the world and asked them how they would ensure the elasticity and scalability of an enterprise application, they’d answer (in unison, I’d imagine): load testing. Do it early and often.
Well, that was the world view at NASDAQ before the Facebook (NASDAQ: FB) IPO, perhaps the highest-profile IPO of the past five years. And we saw how well that worked out for them. NASDAQ has reserved some $40 million to repay its clients after it mishandled Facebook’s market debut. But it doesn’t look like it’ll end there. Investors and financial firms are claiming that NASDAQ’s mistake cost them upwards of $500 million. And with legal action on the way, who knows what the final tally will be.
If ever there was a case for the religious faithful (like me) of application quality assessment to evangelize that failure to test your entire system holistically has a high potential cost, here you go. NASDAQ has hard numbers on the table.
NASDAQ readily admitted to extensive load testing before the iconic Facebook IPO, and it was confident, based on load test results, that its system would hold up. It was wrong, because its test results were wrong. And the final result was an epic failure that will go down in IPO history (not to mention the ledger books of a few ticked-off investors).
Now, I’m not saying that you shouldn’t be load testing your applications. You should. But load testing happens at the later stages of your application development process, and just before the significant events you anticipate will cause a linear or even exponential increase in capacity demands on your website or application.
However, load testing gets hung up on its prescriptive approach to user patterns. You create tests that assume prescribed scenarios the application will have to deal with. Under this umbrella of testing, the unanticipated is never anticipated. You’re never testing the entire system. You are always testing a sequence of events.
But there is a way to test for the unexpected, and that’s (wait for it) application quality assessment and performance measurement. If NASDAQ had analyzed its entire system holistically in addition to traditional load testing, it would have (1) dramatically reduced the number of scenarios it needed to load test, and (2) figured out which type of user interaction could have broken the application when it was deployed.
Application quality assessment goes beyond the simple virtualization of large user loads. It drills into an application, down to the source code, to look for its weak points — usually where the data is flying about — and measures its adherence to architectural and coding standards. You wouldn’t test the structural integrity of a building by jamming it with the maximum number of occupants; you’d assess its foundations and its engineering.
NASDAQ could have drilled down into the source code of its system and identified and eliminated dangerous defects early on. This would have led to a resilient application that wouldn’t bomb when the IPO went live, and would have saved the company the $40 to $500 million we’re estimating it’s exposed to now for its defective application.
At the end of the day, quality assessment can succeed where load testing can fail, and has failed. Had NASDAQ considered software quality analysis before Facebook went public, there’s a good chance it would still have $40 million burning a hole in its pocket. However, our friends at NASDAQ load tested how they thought users would be accessing their systems, then sat back and waited for arguably the most anticipated IPO in recent history. Little did they know they would be the ones making headlines the next day.