Anybody reading this post probably thinks they know all they need to know about component-based development. But in my experience, many organizations fall short when it comes to complete component testing. There is one crucial aspect of component-based development that, if neglected, is potentially damaging to your career and to your company. But before I go there, let’s rehash why development teams love component-based development.
Component-based development is a rules-based approach to defining, implementing, and composing independent components into a software system. What do we get out of this? Well, we get higher reuse, because once a component is built, it can be reused again and again in different applications.
We’re essentially creating a box of building blocks that can be reused, ensuring each future app has less to build anew. It also makes development much faster, because instead of having to reinvent the wheel, we simply reuse the wheel. We add only those pieces that are new and specific to the current application’s requirements, and which have not been built before.
But here’s the rub: In this software engineering discipline, there doesn’t seem to be as much of an understanding of component-based development from a testing perspective. Sure, people think they are testing their apps. But in fact, in many cases they’re testing the individual components and not the app itself.
The deeper your team gets into componentizing your applications, the more it needs to take what I would call a systematic approach to understanding the application’s behavior, and therefore its performance (performance meaning elasticity, scalability, and speed, among other important factors).
Today, many organizations are just throwing applications over the wall to testers and having them beat on components at a granular or higher level, usually manually or with scripting. But this shifts the focus to the percentage of the application tested, as opposed to taking a holistic approach to testing the full application and all of its interactions with components and with other systems (databases, servers, interfaces, and so on) that the application depends and capitalizes on. I call this approach generic testing; in the end, companies spend tremendous amounts of money on it without getting maximum ROI back from their testing efforts.
Worse, organizations assume that once their suite of test cases passes, they are risk free. That’s because many organizations haven’t matured from an overall testing standpoint and don’t understand everything they need to look for. Many, in fact, think that because they’re using CBD, they’re improving quality by definition.
And as the landscape of their apps continues to change and grow more complex, the need for really robust (robust meaning a systematic, not generic, approach) testing becomes more apparent. So even if an individual component is perfect, you still need tests that show all of these components working together. Do they show whether or not somebody is, for example, copying and pasting the code from one component into another instead of properly using interfaces?
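To make that anti-pattern concrete, here is a minimal hypothetical sketch (the component names and the validation rule are invented purely for illustration): component-level tests would pass for both versions below, but only a holistic view reveals that one component has cloned another’s logic instead of calling its interface.

```python
# Anti-pattern: validation logic copy-pasted from one component into
# another. Each copy passes its own unit tests, but the two copies
# silently drift apart the moment either one is edited.
def validate_email_copy(addr: str) -> bool:  # pasted into a second component
    return "@" in addr and "." in addr.split("@")[-1]


# Component-based approach: one component owns the logic and exposes a
# stable interface; every other component calls it instead of cloning it.
class EmailValidator:
    """Single reusable component with a published interface."""

    def is_valid(self, addr: str) -> bool:
        return "@" in addr and "." in addr.split("@")[-1]


validator = EmailValidator()            # one shared component, reused everywhere
print(validator.is_valid("a@b.com"))    # True
print(validator.is_valid("not-an-email"))  # False
```

A structural, application-wide analysis is what catches the duplicated function; no amount of testing either component in isolation will.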
Today, a large percentage of business technology consumers are thinking along those lines, and it can lead to significant problems. If an organization throws everything up into the cloud without really understanding what it’s throwing up there, it’s going to wake up with the same problems.
So what can your development team do to avoid these issues? How does your team ensure the optimal structural quality of your finished apps? Let’s start with what you shouldn’t do. Don’t stop doing all the testing you’re doing. It all feeds into the end result. But you’ve got to add what’s missing: the capability to take a systematic end-to-end X-ray of the apps’ structural integrity.
This is not the be-all and end-all; it is simply a crucial missing piece in almost all application development today. We believe CAST is a vendor with strong tools to solve this problem. But this post is not about products. This is a post to get you to realize that you can work smarter, not harder, at identifying bugs, security issues, reliability issues, capacity issues, privacy issues, and so on by closing the gap.
This doesn’t mean that you have to instrument every API and create more work for yourself. The tools should be doing this automatically for you. Just as your load testing tool automatically synthesizes users, your system’s structural integrity should be visualized automatically as well.
As Krusty the Clown from The Simpsons said, “It’s not just good, it’s good enough.” So organizations need to pass the Krusty test. Their testing solutions might be good, but are they good enough to ensure your applications will be high quality and function as expected? Most don’t know, because they haven’t been taking a systematic, holistic approach to evaluating and testing their components and the applications they create.