How to Build the Best Action Plan for your Application

Applications are built on thousands, millions, maybe even tens of millions of lines of code. They combine technologies, frameworks, and databases, each set up with its own specific architecture. If you need an action plan to improve your application on a specific issue, what will your strategy be?
Do you select one problem related to quality, or take the opportunity to refactor part of your application? You know about the issues reported by end users, but how do you trace them back to the structure of your application?
I remember meeting with development teams and management who were trying to find the root cause of performance issues, as delays or malfunctions in the application would severely impact the business. The application was a mix of new and old technologies, and the team was struggling to integrate the new user interface with the rest of the application.
They debated for hours over which direction to take, and the pressure was high. The team responsible for the user interface said the old platform had to be rewritten; of course, the rest of the team countered that their part of the application had worked well before the new interface appeared, and that the database structure was causing the performance issues. Management was totally lost and didn’t know what to decide! And while this went on, we could sense the business value decreasing.
We decided to analyze the entire application with CAST, using the transaction-ranking method to identify performance issues. Each transaction is ranked by the sum of the high-risk defects attached to it, across the many methods and procedure calls along its path.
The CAST Dashboard includes a view dedicated to listing transactions from the end-user point of view. The transactions that appeared at the top of the list were precisely the ones with real performance issues. From there, it becomes easy to select the most critical defects inside one specific transaction and add them to your action plan.
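To make the ranking concrete, here is a minimal sketch of the idea in Python. It is not CAST’s actual algorithm; the component names, defect rules, and severity weights are illustrative assumptions, but it shows how transactions can be ordered by summing the risk of the defects found along their call paths.

```python
from collections import namedtuple

# Hypothetical severity weights for defects flagged by static analysis.
SEVERITY_WEIGHT = {"high": 10, "medium": 3, "low": 1}

Defect = namedtuple("Defect", ["rule", "severity"])

# Defects attached to the methods / procedures the analysis has scanned.
defects_by_component = {
    "OrderDAO.findAll":     [Defect("SQL query inside loop", "high")],
    "OrderService.reprice": [Defect("nested loop over collection", "high"),
                             Defect("swallowed exception", "low")],
    "CustomerDAO.search":   [Defect("query on unindexed column", "high")],
    "ReportUI.render":      [Defect("string concatenation in loop", "medium")],
}

# Each end-user transaction maps to the components it calls.
transactions = {
    "Submit order":    ["OrderDAO.findAll", "OrderService.reprice"],
    "Search customer": ["CustomerDAO.search"],
    "Monthly report":  ["ReportUI.render"],
}

def transaction_risk(call_path):
    """Sum the weighted defects of every component the transaction touches."""
    return sum(SEVERITY_WEIGHT[d.severity]
               for component in call_path
               for d in defects_by_component.get(component, []))

# Rank transactions from riskiest to least risky, as a dashboard would list them.
ranking = sorted(transactions.items(),
                 key=lambda item: transaction_risk(item[1]),
                 reverse=True)

for name, path in ranking:
    print(f"{transaction_risk(path):3d}  {name}")
```

The transaction at the top of such a list is the natural first candidate for the action plan.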
These results, coming from a solution like CAST, were factual and not debatable. They highlighted that the defects correlated with the performance issues were a combination of bad programming practices coming from different parts of the application.
We decided to work only on the highest-risk transaction so we could measure the performance improvement in production. In the end, all the teams worked together, because the root causes were a combination of bad looping practices, an improperly managed homemade framework data layer, and a huge amount of data sitting in tables without proper indexes.
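To illustrate that kind of defect combination, here is a simplified, hypothetical reconstruction in Python, using SQLite so it runs self-contained; the table and column names are made up. It shows a query issued on every loop iteration against an unindexed column, and the indexed, set-based alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                 [(i % 100, i * 0.5) for i in range(50_000)])

customer_ids = list(range(100))

def totals_slow():
    # Anti-pattern: one query per loop iteration, and each query is a full
    # table scan because customer_id carries no index.
    return {cid: conn.execute(
                "SELECT SUM(amount) FROM orders WHERE customer_id = ?", (cid,)
            ).fetchone()[0]
            for cid in customer_ids}

def totals_fast():
    # Fix: add the missing index and let the database aggregate in one pass.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
    return dict(conn.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"))

slow, fast = totals_slow(), totals_fast()
assert set(slow) == set(fast)  # same customers, radically different cost profiles
```

Neither the loop nor the missing index is fatal on its own; it is the combination, multiplied by production data volumes, that kills the transaction.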
This is just one successful way to build an effective action plan. What’s your experience? Do you have a success story about how you built an effective action plan? Be sure to share in a comment.
 

Cracking Open the Black Box of IT for CEOs

I spend some of my time with CEOs and CFOs, and time and again they tell me that IT is a black box that’s difficult, if not impossible, to measure. They can’t measure productivity. They can’t measure output. They can’t measure outcomes. They can’t measure risk. The one thing they can measure is IT cost.
Just this week the CEO of a well-known financial services company told me: “I have 2,000 people working in IT with a budget of $200 million a year, and yet I have no idea how the development teams are doing in relation to the competition, or if I’m even getting my money’s worth. And if I ask my CIO what’s going on, he just tells me he’s putting processes in place but it’s taking time — that creating software is a difficult art — and eventually making me understand I should let him manage his team because IT folks might not like the idea of having their work measured.”
I’m certain any CEOs reading this are nodding in agreement. The fact is, CEOs can do more than simply ask their CIOs or CTOs for status updates on major projects and initiatives, and gauge success based on whether deadlines are hit. Like any other function, IT performance, productivity, and value should be measured. The secret is to move away from status updates and into scientific measurement. As physicist John Grebe said, “If you cannot measure it, you cannot control it.” The good news is that today, the software aspect of the IT black box can be turned into a glass box: software development can be measured in several ways, and KPIs can be established and benchmarked.
The bad news? Development teams, and sometimes IT leaders, reject measurement on the grounds that they should be judged on outcomes (does the system work, complying with functional specs and end-user expectations, or doesn’t it?) and not on their performance. Performance measurement for software development has not matured the way it has in marketing, finance, or manufacturing.
That’s too bad, because enterprise software development is no different from any other complex industrial process. As an industrial process, it can, should, and must be measured. Testing, functionality, and outcomes are certainly valid measurements, but you can’t improve those outcomes and optimize their production if you aren’t visualizing the process in ways that are meaningful to both technical and executive leadership. It would be like waiting until the end of the assembly line to test whether the manufactured products work.
And development teams won’t visualize the process until the CEO steps in and demands transparency into IT. CEOs must stop letting their IT organizations get away with a wait-and-see approach that judges success based solely on outcomes.
IT is not only a huge business expense. Today, it’s the DNA of the organization.
Even though most CEOs can’t decode this DNA, they must understand how it evolves, and how that evolution impacts their organization. Louis Pasteur said, “A science is as mature as its measurement tools.” In computer science, the tools for measurement are available. And at CAST, the tools for visualizing those measurements and making them useful for CEOs have never been better. It’s now just a matter of getting CIOs and development teams to see measurement as a boon rather than a hindrance.
That isn’t always easy. Most CIOs are dealing with one of two scenarios: 1) they have a small group of coding gurus making their apps who have direct access to the business; or 2) they have huge development teams (in-house or outsourced) where a single programmer focuses on a big block of code, unaware of how it interacts with the rest of an application system.
Of course, coding gurus offer the greater pushback against measurement. Small teams of experts will say they’re lone wolves who do things in their own super-powered way. This defense is known as the “coding cowboy” argument. It’s a red flag that must be eliminated if an organization is to reduce its spending on application development and maintenance and maximize overall system quality. Even gurus must eventually understand that transparency is a good thing for them.
For larger, distributed teams, the pushback is typically much smaller, and productivity programs are even welcomed by some software factories that would like to show they do a better job than others. In both cases, the main, legitimate question is about the measures themselves and their reliability. When one gets measured, one expects high precision.
The Consortium for IT Software Quality (CISQ) has defined a series of software characteristics and attributes that must be taken into account to offer a measurement framework that really works. CISQ is supported by the Software Engineering Institute, the Object Management Group, and dozens of big IT organizations. It is a complex framework, but it offers precise measurement that can raise credibility and decrease pushback. The trap to avoid here is believing that you can measure productivity and quality by counting lines of code and applying some quality checks offered by freeware or cheap code-quality tools. It’s appealing because it’s simple and sounds like a “good start,” but it just gives wrong indications, unfair measures, and unwanted behavior, making the situation even worse.
A CIO needs to get development teams to see software analytics and measurement as a way to quickly improve their work by visualizing workflows, practices, productivity, and quality. Moreover, they need to understand that the most sophisticated measurement systems today help make their work understandable to the CEO, and even more valuable to the organization. Once these changes are in place, CEOs will no longer see IT as a black box. It will become as manageable and measurable as any other part of the business.

Load Testing Fails Facebook IPO for NASDAQ

Are you load testing? Great. Are you counting on load testing to protect your organization from a catastrophic system failure? Well, that’s not so great.
I bet if you surveyed enterprise application development teams around the world and asked them how they would ensure the elasticity and scalability of an enterprise application, they’d answer (in unison, I’d imagine): load testing. Do it early and often.
Well, that was the world view at NASDAQ before the Facebook (NASDAQ: FB) IPO, perhaps the highest-profile IPO of the past five years. And we saw how well that worked out for them. NASDAQ has reserved some $40 million to repay its clients after it mishandled Facebook’s market debut. But it doesn’t look like it’ll end there. Investors and financial firms are claiming that NASDAQ’s mistake cost them upwards of $500 million. And with legal action on the way, who knows what the final tally will be.
If ever there was a case for the religious faithful of application quality assessment (like me) to evangelize the high potential cost of failing to test your entire system holistically, here it is. NASDAQ has put hard numbers on the table.
NASDAQ readily admitted to extensive load testing before the iconic Facebook IPO, and it was confident, based on load test results, that its system would hold up. It was wrong, because its test results were wrong. And the final result was an epic failure that will go down in IPO history (not to mention the ledger books of a few ticked-off investors).
Now, I’m not saying that you shouldn’t load test your applications. You should. But load testing happens in the later stages of the development process, just before the significant events you anticipate will cause a linear or even exponential increase in demand on your website or application.
However, load testing gets hung up on its prescriptive approach to user patterns. You create tests that assume prescribed scenarios the application will have to deal with. Under this umbrella of testing, the unanticipated is never anticipated. You’re never testing the entire system; you’re always testing a sequence of events.
But there is a way to test for the unexpected, and that’s (wait for it) application quality assessment and performance measurement. If NASDAQ had analyzed its entire system holistically in addition to traditional load testing, it would have (1) dramatically reduced the number of scenarios it needed to load test, and (2) figured out which type of user interaction could have broken the application when it was deployed.
Application quality assessment goes beyond simply simulating large user loads. It drills into an application, down to the source code, to look for its weak points (usually wherever the data is flying about) and measures its adherence to architectural and coding standards. You wouldn’t test the structural integrity of a building by jamming it with the maximum number of occupants; you’d assess its foundations and its engineering.
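To give a flavor of what that code-level drilling looks like, here is a toy sketch in Python. It is not how CAST or any particular analyzer is implemented; the sample source and the rule are made up, but it flags one classic weak point that scenario-based load tests rarely surface: a query executed inside a loop.

```python
import ast

SOURCE = '''
def refresh_quotes(conn, symbols):
    for sym in symbols:
        # One query per symbol: cost grows with the watch list, and an
        # unanticipated surge of symbols turns this into the bottleneck.
        conn.execute("SELECT last_price FROM quotes WHERE symbol = ?", (sym,))
'''

class QueryInLoopFinder(ast.NodeVisitor):
    """Flag calls named 'execute' that occur inside a for/while loop."""

    def __init__(self):
        self.loop_depth = 0
        self.findings = []

    def visit_For(self, node):
        self.loop_depth += 1
        self.generic_visit(node)
        self.loop_depth -= 1

    visit_While = visit_For

    def visit_Call(self, node):
        if (self.loop_depth
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"):
            self.findings.append(node.lineno)
        self.generic_visit(node)

finder = QueryInLoopFinder()
finder.visit(ast.parse(SOURCE))
print("query-in-loop at lines:", finder.findings)  # -> [6] in this toy example
```

A real assessment applies hundreds of such architectural and coding checks across every layer of the system, which is exactly the kind of weakness no prescribed load scenario is written to expose.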
NASDAQ could have drilled down into the source code of its system and identified and eliminated dangerous defects early on. This would have led to a resilient application that wouldn’t bomb when the IPO went live, and it would have saved the company the $40 million to $500 million it’s now estimated to be exposed to for its defective application.
At the end of the day, quality assessment can succeed where load testing can, and has, failed. Had NASDAQ considered software quality analysis before Facebook went public, there’s a good chance it would still have $40 million burning a hole in its pocket. Instead, our friends at NASDAQ load tested how they thought users would access their systems, then sat back and waited for arguably the most anticipated IPO in recent history. Little did they know they would be the ones making headlines the next day.