Royal Bank of Scotland’s IT Failure Exposes Vulnerabilities in Digital Banking

Last Wednesday the Royal Bank of Scotland (RBS) suffered an IT failure that held up 600,000 payments to customer accounts. The incident came just seven months after RBS was fined £56 million over a 2012 IT crash that prevented customers from accessing their online accounts. The poor system performance has caused real difficulties for customers and sent shockwaves through the banking community.

IT Transformation Webinar Questions Answered

During last week’s webinar on IT Transformation featuring Marc Cecere, vice president and principal analyst at Forrester Research, many questions submitted by participants went unanswered due to time constraints. Because these questions are likely on the minds of many in the IT arena, we asked Marc’s webinar co-host, Pete Pizzutillo of CAST, to answer the three most frequently asked ones.

How to Build the Best Action Plan for your Application

Applications are built on thousands, millions, maybe even tens of millions of lines of code. They combine technologies, frameworks, and databases, each set up with its own specific architecture. If you have an action plan to improve your application on a specific issue, what will your strategy be?
Do you target a single quality problem, or take the opportunity to refactor part of your application? You know about the issues reported by end users, but how do you trace them back to the structure of your application?
I remember meeting with development teams and management who were trying to find the root cause of performance issues, because delays and malfunctions in the application were severely impacting the business. The application was a mix of new and old technologies, and the team was struggling to integrate the new user interface with the rest of the system.
They debated for hours over which direction to take, and the pressure was high. The team responsible for the user interface insisted the old platform had to be rewritten, and of course the rest of the team countered that their part of the application had worked well before the new interface appeared and that the database structure was causing the performance issues. Management was totally lost and didn’t know what to decide! And while all this was going on, we could sense the business value draining away.
We decided to analyze the entire application with CAST, using the transaction ranking method to identify performance issues. Each transaction is ranked by the sum of the high risks attached to it, accumulated across the many methods and procedure calls it traverses.
The CAST Dashboard includes a view dedicated to listing transactions from the end-user point of view. The transactions that appeared at the top of the list were precisely those with real performance issues. From there, it becomes easy to select the most critical defects within a specific transaction and add them to your action plan.
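As a rough illustration of the idea (not CAST’s actual scoring model), here is a minimal sketch of ranking transactions by the summed severity of the defects found along their call paths; the transaction names, defect labels and weights are all invented for the example.

```python
# Hypothetical sketch of risk-based transaction ranking.
# The data and severity weights are made up for illustration only.

SEVERITY_WEIGHT = {"high": 10, "medium": 3, "low": 1}

# Each transaction lists the defects detected on the methods and
# procedure calls along its execution path.
transactions = {
    "SubmitOrder":   [("sql_in_loop", "high"), ("missing_index", "high")],
    "ViewInvoice":   [("unclosed_resource", "medium")],
    "UpdateProfile": [("magic_number", "low"), ("deep_nesting", "medium")],
}

def risk_score(defects):
    """Sum the severity weights of all defects on a transaction's path."""
    return sum(SEVERITY_WEIGHT[severity] for _, severity in defects)

# Rank transactions so the riskiest ones surface at the top of the list.
for name, defects in sorted(transactions.items(),
                            key=lambda kv: risk_score(kv[1]),
                            reverse=True):
    print(f"{name}: score={risk_score(defects)}, defects={len(defects)}")
```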
These results, coming from a solution like CAST, were factual and not debatable. They highlighted the fact that the defects behind the performance issues were a combination of bad programming practices coming from different parts of the application.
We decided to work only on the highest-risk transaction so we could measure the performance improvement in production. In the end, all the teams worked together, because the root causes were a combination of bad loop practices, an improperly managed homemade framework data layer, and huge volumes of data in tables without proper indexes.
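To make those root causes concrete, here is a hypothetical before-and-after sketch of the kind of defect involved: a query fired inside a loop against an unindexed table versus a single set-based query backed by an index. The schema and function names are invented for illustration.

```python
# Hypothetical illustration of a "query in a loop" bad practice and a
# missing-index fix; the schema is invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")

def totals_per_customer_slow(customer_ids):
    # Bad loop practice: one query per customer (N+1 round trips),
    # each one scanning a table with no index on customer_id.
    return {
        cid: conn.execute(
            "SELECT SUM(total) FROM orders WHERE customer_id = ?", (cid,)
        ).fetchone()[0]
        for cid in customer_ids
    }

def totals_per_customer_fast(customer_ids):
    # Set-based alternative: a single grouped query, supported by an index.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer_id)"
    )
    placeholders = ",".join("?" * len(customer_ids))
    rows = conn.execute(
        f"SELECT customer_id, SUM(total) FROM orders "
        f"WHERE customer_id IN ({placeholders}) GROUP BY customer_id",
        list(customer_ids),
    ).fetchall()
    return dict(rows)
```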
This is just one successful way to build an effective action plan. What’s your experience? Do you have a success story about how you built an effective action plan? Be sure to share in a comment.
 

The Personnel Side of Technical Debt

I have been an East-Coaster all my life. I’ve lived, worked and even attended college in states that all lie East of the Mississippi. However, throughout my 18 years working in the technology business, my clients have been spread out around the U.S. and abroad. I’ve found myself doing phone calls before the sun rises and well after it has set. That’s just the way it is in this business.
While it is admittedly easier to write about companies located in another state, the remote worker hardly begins and ends with us writers. More often than not I’m working with companies whose developers, architects, managers, directors and even executives are spread out over multiple locations. One Canadian-based division of a company I represented showed a real sense of panache and even went so far as to build a robot with an embedded webcam so that one of its developers could move to another province while still maintaining a “physical” presence in the office.
But the truth of the matter is that communication between people who don’t share an office is still problematic. You simply cannot be as free and open over email, or even the phone, when it comes to scheduling meetings and sharing ideas.
As Johanna Rothman points out in her Managing Product Development blog:
“Let’s assume you have what I see in my consulting practice: an architecture group in one location, or an architect in one location and architects around the world; developers and “their” testers in multiple time zones; product owners separated from their developers and testers. The program is called agile because the program is working in iterations. And, because it’s a program, the software pre-existed the existence of the agile transition in the organization, so you have legacy technical debt up the wazoo (the technical term). What do you do?”
It’s Personal…and It Isn’t
Being the good consultant and managerial type she is, Rothman offers a number of ways to mitigate issues between distributed teams of developers. Her ideas are solid and very much in tune with promoting a good work environment for developers. She suggests establishing teams so that developers can’t be borrowed between offices, and ensuring that each team has a specific task or feature on which it always works. She says to set specific goals for each team, and she espouses taking extra time to map out what needs to be tested in each iteration so that teams in different time zones are not inconvenienced by conference calls outside their normal work hours.
All of these have great potential to make teams work together better…I’m not sure they solve the problem of the structural quality of the software being developed, though. The problem there is similar to something I learned in my eighth grade Chemistry class. There I learned that every vessel you use to measure something has a certain amount of inaccuracy to it. Therefore, when you use multiple vessels to measure something, you increase the level of inaccuracy for that measurement.
Similarly, when you split up a project, each team yields a certain level of control to its counterparts in other offices, and geographic differences (whether teams are split between regions of the U.S. or between countries) shape what goes into the software and how the code is written. Those are issues that cannot be resolved easily when the various portions of a project are brought back together. However, if an organization implements an across-the-board requirement that all teams perform some form of structural analysis throughout the process, it can limit potential technical debt as well as provide a measure of visibility into the quality of the application software being developed.
Personally, I think combining Rothman’s ideas with structural analysis sounds like a good idea…technically speaking.
 

Great Expectations and How to Meet Them

There’s a very old mantra around project quality that says, “If you want something done right, do it yourself.”
I disagree.
We recently remodeled the bathroom in our master bedroom. Rather than taking my own sledgehammer to the walls, tub and toilet and then hanging my own sheetrock, my wife and I hired a local contractor who came in and did the demolition and reconstruction, and in the end we wound up with a room we’re very happy with.
I can tell you without reservation that had I done it myself, the project would have turned out disastrously, because I confess to a certain measure of incompetence when it comes to carpentry…and plumbing…and electrical systems…and just about every other discipline that goes into rebuilding a bathroom.
I guess you could say we had “great expectations” and knew that to achieve them we needed to find someone else to do the job.
Losing Control
I suspect that this kind of do-it-yourself incapability is not always what drives companies to outsource software builds, but there is some measure of it. The decision to outsource usually comes down to one of two reasons: a company either doesn’t have the time to do the work itself or feels an outside group can do it better.
This decision to outsource is being made by an increasingly large segment of the business community. As was recently noted on The Outsourcing Blog, “the public and private sectors alike are becoming increasingly reliant on third-party suppliers to effectively operate.”
What is a bit off-putting, however, is the claim made in that post “that some 64% of third-parties fail to meet stakeholder expectations and contractual commitments, according to recent research we have undertaken.”
The fact of the matter is, regardless of where a company chooses to outsource, there is a certain relinquishment of control. It is simply neither possible, nor desirable to hold tightly to the reins of all aspects of an outsourced project. When the outsourced project has an offshored element, the potential increase in benefits is met with an equivalent set of risks. Cultural differences and distance alone significantly contribute to increasing both the risks and management costs.
Much of this can be attributed to the fact that organizations have not previously had the means to assess application software quality in real-time when its development has been outsourced. QA plan compliance checks, while useful in some capacities, are normally performed via random manual code reviews and inspections by QA staff. For a typical one million-line-of-code J2EE distributed application, there is significant risk that key issues will go overlooked. Furthermore, standard functional and technical acceptance testing is simply insufficient at detecting severe coding defects that may have impact on the reliability and maintainability of an application. Finally, in the current geopolitical context, programming vulnerabilities, or even hazardous code in a mission-critical application, could easily produce disasters in production – data corruption or losses, system downtime at crucial moments – all of which negatively affect the business operations.
Unfortunately, most IT organizations have chosen to leave technical compliance issues aside, either because resources are scarce or because they lack the required skills. Instead, they all too frequently assume that tersely worded SLAs will be enough to protect them over time. In reality, while today’s SLAs routinely include financial penalty clauses and can trigger fines and legal battles, they are not all that effective at preventing system failures.
Get it Right
In order to be successful, companies need to acquire and deploy software solutions that help manage these global partnerships by providing greater insight into the build process through real-time access to objective data. Employing a platform of automated analysis and measurement to assess the application as it is being built, for instance, affords transparency into the outsourced work, instills discipline in how information is handled and yields metrics with which to evaluate results.
With that kind of real-time access and information into how a company’s software is being built or customized, it won’t matter if the outsourcer is across the hall, across the street or across the ocean. You will always know just where your software is and if the outsourcer is building it efficiently and up to your high application software quality standards. Not having that kind of insight could lead to software issues that would scare the Dickens out of you!

Speed Kills

Some among us may remember Earl Scheib, who owned a chain of auto painting facilities; at least, that’s what he called them. In actual fact, his shops were a national joke. In his TV commercials he would tell viewers, “I’ll paint any car for $99.95,” and would promise one-day service. He did just that, but as the old saying goes, “You get what you pay for.”
All Scheib really cared about was sales and he thought the way to increase them was by promising something cheaper and faster than the competition. The low cost and speed, however, came at the cost of quality.
What Scheib seemed to miss is that the concept of “value” has two sides to its equation. Not only does something have to come at the right price and in a timely manner, it also has to provide an acceptable level of quality regardless of the price; otherwise the value suffers. And whether it’s having auto body work done, buying groceries or developing software, value is truly what the consumer seeks.
Of course, when it comes to painting cars or buying groceries, we determine value rather easily. Determining software value, however, is a much more complicated process, but I can assure you, that process goes well beyond the question of how fast you can get the code written.
Diligently Chasing Quality
While the size of embedded software increases every year, the basic development tasks remain the same: editing, compiling and debugging. Magnus Unemyr of Atollic AB notes that one key to unlocking a more efficient software development process begins with creating a “well-thought-out design that is maintainable” and managing code complexity. He continues by noting that developers should think of themselves not exclusively as “code writers,” but also as “software engineers” focused on improving the efficiency and quality of the entire development value chain.
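To make “managing code complexity” a little more concrete, here is a minimal, hypothetical sketch (not tied to any particular vendor’s tooling) that produces a rough cyclomatic-complexity-style score for Python functions by counting obvious branch points.

```python
# Hypothetical sketch: a rough complexity score per function, obtained by
# counting obvious branch points. Real static-analysis tools go much deeper.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def complexity(source: str) -> dict:
    """Return {function_name: 1 + number of branch points found}."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = branches + 1
    return scores

if __name__ == "__main__":
    sample = """
def pick(x):
    if x > 10:
        return "big"
    for i in range(x):
        if i % 2:
            return "odd"
    return "small"
"""
    print(complexity(sample))  # {'pick': 4}
```

The same idea can be wired into a build so that functions exceeding an agreed threshold are flagged before they are merged.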
So let’s have a look at advances in traditional development tools that can improve quality (and, in the process, development speed), and then move on to other factors involved in improving software development value.

Editing Tools: Easier navigation in editing tools can result in less complexity, and less complexity often translates to better code. These should include features such as color-coded syntax and expansion/collapsing of code blocks. They should also include a smart editing capability with a configurable coding style.
Compilation Solutions: New compilation solutions should feature advanced build systems and support both application and library projects, which can be built in “managed” or “makefile” mode. Developers can also look for dual tool chains, one addressing the embedded microcontroller device and the other targeting Windows-based PCs. This approach allows PC engineers to develop utilities that share configuration data with the embedded board, or log data from embedded boards, without the need to buy Microsoft Visual Studio.
Debugging Tools: Developers should look for debugging solutions that include multi-processor debug capabilities and real-time tracing.  They might also include support for a wide range of features, such as simple and conditional code and data breakpoints, full execution control functions, memory and CPU register views and call-stack views.
Code Management: New code management solutions ensure that, as project requirements change and developers come and go, robust version control is in place, so that future developers can understand the reasons behind the extensions and modifications made to the code during the project.

Code reviews require extra steps and time, but they are among the least expensive ways to improve software quality, because with each successive phase of a software build it takes roughly ten times longer and costs roughly ten times more to find and fix an error.
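As a quick back-of-the-envelope illustration of that compounding rule (the starting dollar figure and phase names are invented; only the tenfold-per-phase assumption comes from the rule of thumb above):

```python
# Back-of-the-envelope illustration of the "ten times more per phase" rule.
# The $100 starting cost and the phase names are arbitrary assumptions.
phases = ["design review", "code review", "system test", "production"]
cost = 100  # hypothetical cost to fix a defect caught at the first phase

for phase in phases:
    print(f"Defect found in {phase}: ${cost:,}")
    cost *= 10
# Prints $100, $1,000, $10,000 and finally $100,000 for a defect that
# survives all the way to production.
```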
Accelerating Quality through Communication

Another concept involved in maximizing software value is predictability. If you think about it, customers are less annoyed by the time it takes to complete a development project than by being told that the project is late. Typically, this lack of predictability lies deep within the software development process and is usually the result of a chain reaction, not a single event. An activity that improves predictability at the beginning of the project will have a positive ripple effect on the entire development process.
One new development methodology that comes to mind is DevOps, which promises to speed the development process, but also has the ability to improve predictability.
The big advantage of DevOps, in my view, is its ability to tighten the communication loop between developers and operations. This collaboration facilitates faster changes, better updating and improved scalability, and it enables companies to fix issues in their code on the fly rather than letting them linger into succeeding stages of the build process, where they can take ten times longer to fix. Along with careful quality control processes, such as having multiple engineers review code before it goes online, DevOps can contribute to improved predictability, quality and value.
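In practice, that kind of quality control step often takes the form of an automated gate in the delivery pipeline. The sketch below is a hypothetical example, not a description of any specific DevOps toolchain; the metrics file format and the threshold values are invented.

```python
# Hypothetical pipeline quality gate: fail the build when structural
# metrics exceed agreed thresholds. The metrics.json format and the
# threshold values are assumptions made for this example.
import json
import sys

THRESHOLDS = {
    "critical_violations": 0,     # no new critical structural violations
    "max_complexity": 15,         # highest allowed function complexity
    "duplicated_lines_pct": 5.0,  # duplication budget, in percent
}

def gate(metrics_path: str = "metrics.json") -> int:
    with open(metrics_path) as f:
        metrics = json.load(f)
    failures = [
        f"{name}={metrics.get(name)} exceeds limit {limit}"
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0) > limit
    ]
    for failure in failures:
        print("QUALITY GATE FAILED:", failure)
    return 1 if failures else 0  # non-zero exit code stops the pipeline

if __name__ == "__main__":
    sys.exit(gate(*sys.argv[1:]))
```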
Achieving Value
While enhanced communication is nice, DevOps only works if the issues with the software being built are made visible. This is why one of the most significant tasks in the “value add” process of a software build is ongoing structural analysis and measurement.
Automated analysis and measurement solutions apply advanced diagnostics to identify and quantify structural flaws, arming developers with the information necessary to make fixes. And although these solutions may add a bit of time to the development process, they are far more efficient and far less time-consuming than manual structural analysis.
When coupled with a well-thought-out development plan that improves predictability, and used in concert with advanced editing, debugging and other tools, automated analysis and measurement solutions provide visibility into the structural quality of an application. Ultimately, optimized software is delivered faster, which greatly increases the long-term value of the application…not to mention that spending a little time getting it right adds far less time than fixing malfunctions, recovering from outages or mitigating data losses caused by breaches of vulnerable software.
So the next time you hear someone talk about how fast and cheap they will deliver their software, remember Earl Scheib and tell them you’d prefer it done right.

Toast, Coffee & Software Quality

Last week’s admissions of bugs in newly released software by Apple and Google were just the latest reminders that the battle between bringing software products to market quickly and optimizing software quality is coming to a head in a year that has seen far more than its share of software outages, malfunctions and security breaches. Most of these incidents have been the direct result of poor structural quality, and they have cost the companies hit by them a great deal, both financially and in terms of reputation.
In a case of appropriate timing, this struggle will be the topic of a three-city executive breakfast series this week titled, “The Economics of Quality Software.” Hosted by CAST, the world leader in software analysis and measurement, each of the three sessions will present how, in the face of the current regulatory requirements, intense competition and merger activity, executives can make the difficult choice between speed and quality when it comes to application development.
The sessions will feature technology industry luminary, Capers Jones, and CAST Vice President of Product Development, Olivier Bonsignour, who co-wrote the recently released book, “The Economics of Software Quality”. The presenters will offer practical, data-driven methods for helping executives make the right tradeoffs between delivery speed, business risk, and technical debt – the cost of fixing problems which, if left unfixed, can lead to outages and data breaches.
The breakfast tour opens tomorrow, Tuesday (November 8th) in Atlanta, GA, continues in Charlotte, NC, on Wednesday (November 9th) and concludes in Dallas, TX, on Thursday (November 10th). To register for any of these events or for more information about them, visit the CAST Events page.