I was watching the gymnastics competition at the Olympics on Sunday night and on more than one occasion heard commentators applaud competitors for their agility. As I watched these gymnasts move swiftly and with exacting precision across the beam, floor, vault and bars, I could not help but marvel at their abilities and at how appropriate a descriptor “agile” was for them.
Long before businesses tossed around the term “Agile” as a method of technology project management, it stood as a word that often affixed to people and objects that displayed a certain set of characteristics. People earning the moniker “agile” almost invariably were both fast and nimble – not just one or the other, but both. These people were quick, clean and to the point. They operated with not only a swiftness, but also a high degree of accuracy.
Jack Be Nimble

There’s no question that when a business enables agile development to be both quick and nimble, it can succeed and limit the company’s technical debt to a workable level. As Microsoft’s Bryan Group points out on the Microsoft Developer Network’s ALM and Beyond Blog, “Some may say being Agile will always sustain TD but I disagree. If you look at Scrum’s three pillars: transparency, inspection, and adaptation it’s clear that abiding by these principles will help ease or for that matter eliminate TD.”
Notice, Group’s focus is on the characteristics of Scrum that deal with achieving accuracy in project management – transparency, inspection and adaptation. What all three of these elements have in common is the taking of time to communicate during the build process and to work quickly and effectively toward a definitive goal. It’s a speed and nimbleness that does agile development proud.
As an avowed Scrum advocate, Group proudly proclaims:
“Beating the Scrum drum even harder, if used properly it’s such an elegant solution to the TD problem. As mentioned, all the members of the project team have visibility into one another so if by some reason a Sprint occurs and TD rears its ugly head, it will most certainly be shown in the Sprint Review and/or Retrospective phase.”
Jack Be Quick
Unfortunately, Marketing Departments today too often hear “agile” and think fast…and only fast. Pushing a product to market before its time, though, almost always has disastrous results. Witness the many instances last year when Marketing appeared to favor speed over quality. There was Apple admitting just one week after the release of the iOS 5 operating system that a bug in the system caused a major battery drain. That same week, Google pulled its iOS Gmail app from the App Store due to a bug that caused a “notification error” – and those are just two examples among many.
These decisions to place time to market ahead of quality cost businesses money and increase their technical debt. Not only that, such thinking is short-sighted.
As Group notes:
“…say that TD is a necessary evil when entering an emerging market because the ability to be first is paramount. Meaning: ship a v1 product at all costs (read: quick and dirty) and worry about fixing any defects, feature enhancements and other items in v2. While being first may help you garner attention and potential market share, if the application quality suffers then you’ll never have the chance to produce a v2 product and your competitor who understood the ramifications of TD will reign supreme. Not to mention if you’re a consulting company who continually does not meet milestones and/or creates an inferior product, the contract with that customer may not be renewed.”
Agile done right – that is agile development that is allowed to be both quick AND accurate – can be highly successful. Agile that just tries to be quick comes up short and whether it’s a gymnast going over a vault or Jack going over his fabled candlestick, coming up short is a painful prospect.
I have been an East-Coaster all my life. I’ve lived, worked and even attended college in states that all lie East of the Mississippi. However, throughout my 18 years working in the technology business, my clients have been spread out around the U.S. and abroad. I’ve found myself doing phone calls before the sun rises and well after it has set. That’s just the way it is in this business.
While it is admittedly easier to write about companies located in another state, the remote worker hardly begins and ends with us writers. More often than not, I’m working with companies that have developers, architects, managers, directors and even executives spread out over multiple locations. One Canadian-based division of a company I represented showed a real sense of panache and even went so far as to build a robot with an embedded web cam to allow one of its developers to move to another province while still maintaining a “physical” presence in the office.
But the truth of the matter is that communication between people not within a single office is still problematic. You just cannot possibly be as free and open via email or even over the phone when it comes to scheduling meetings and sharing ideas.
As Johanna Rothman points out in her Managing Product Development blog:
“Let’s assume you have what I see in my consulting practice: an architecture group in one location, or an architect in one location and architects around the world; developers and “their” testers in multiple time zones; product owners separated from their developers and testers. The program is called agile because the program is working in iterations. And, because it’s a program, the software pre-existed the existence of the agile transition in the organization, so you have legacy technical debt up the wazoo (the technical term). What do you do?”
It’s Personal…and It Isn’t

Being the good consultant and managerial type she is, Rothman offers a number of ways to mitigate issues between dislocated teams of developers. Her ideas are solid and very much in tune with promoting a good work environment for developers. She suggests establishing teams so that developers can’t be borrowed between offices and ensuring that each team always works on a specific task or feature. She says to establish specific goals for each team and espouses taking the extra time to map out what needs to be tested in each iteration so that teams in different time zones are not inconvenienced by conference calls outside their normal work hours.
All of these ideas have great potential to make teams work together better…but I’m not sure they solve the problem of the structural quality of the software being developed. The problem there is similar to something I learned in my eighth grade chemistry class: every vessel you use to measure something carries a certain amount of inaccuracy, so when you use multiple vessels to measure something, you compound the inaccuracy of the overall measurement.
Similarly, when you split up a project, each team yields a certain level of control to its counterparts in other offices, and geographic differences – whether teams are split between areas of the U.S. or between countries – play into what goes into the software and how code is written. Those are issues that cannot be resolved easily when you bring the various portions of a project together. However, if an organization implements an across-the-board requirement that all teams perform some form of structural analysis throughout the process, it can reduce potential technical debt as well as provide a measure of visibility into the quality of the application software being developed.
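The measurement analogy can be made concrete with a quick sketch (the numbers and the `combined_error` helper are purely illustrative, not part of any analysis tool): when several independent “vessels” each carry their own tolerance, the combined uncertainty grows with the number of vessels.

```python
import math

def combined_error(tolerances):
    """Combined uncertainty for independent measurements: the worst case
    adds linearly, while independent random errors add in quadrature
    (root-sum-square).
    """
    worst_case = sum(tolerances)
    rss = math.sqrt(sum(t * t for t in tolerances))
    return worst_case, rss

# One vessel with a +/-1 tolerance vs. three vessels of +/-1 each:
# splitting the measurement across more vessels compounds the error.
print(combined_error([1.0]))             # (1.0, 1.0)
print(combined_error([1.0, 1.0, 1.0]))   # worst case 3.0, RSS about 1.73
```

The same arithmetic applies to a split project: each additional team contributes its own slice of uncertainty to the whole.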
Personally, I think combining Rothman’s ideas with structural analysis sounds like a good idea…technically speaking.
There’s a very old mantra around project quality that says, “If you want something done right, do it yourself.”
I disagree. We recently remodeled the bathroom in our master bedroom. Rather than taking my own sledgehammer to the walls, tub and toilet and then hanging my own sheet rock, my wife and I hired a local contractor who came in, did the demolition and reconstruction, and in the end we wound up with a room with which we’re very happy.
I can tell you without reservation that had I done it myself the project would have turned out disastrous because I confess to a certain measure of incompetence when it comes to carpentry…and plumbing…and electrical systems…and just about every other discipline that goes into rebuilding a bathroom.
I guess you could say we had “great expectations” and knew that to achieve them we needed to find someone else to do the job.
This lack of capability to do something yourself does not always extend to companies when they choose to outsource software builds, but I suspect there is some measure of it. The decision to outsource usually comes down to one of two reasons – a company doesn’t have the time to do the work itself, or it believes an outside group can do the work better.
This decision to outsource is being made by an increasingly large segment of the business community. As was recently noted on The Outsourcing Blog, “the public and private sectors alike are becoming increasingly reliant on third-party suppliers to effectively operate.” What is a bit off-putting, however, is the claim made in that post that “some 64% of third-parties fail to meet stakeholder expectations and contractual commitments, according to recent research we have undertaken.”
The fact of the matter is, regardless of where a company chooses to outsource, there is a certain relinquishment of control. It is simply neither possible, nor desirable to hold tightly to the reins of all aspects of an outsourced project. When the outsourced project has an offshored element, the potential increase in benefits is met with an equivalent set of risks. Cultural differences and distance alone significantly contribute to increasing both the risks and management costs.
Much of this can be attributed to the fact that organizations have not previously had the means to assess application software quality in real time when development is outsourced. QA plan compliance checks, while useful in some capacities, are normally performed via random manual code reviews and inspections by QA staff. For a typical one-million-line-of-code J2EE distributed application, there is significant risk that key issues will be overlooked. Furthermore, standard functional and technical acceptance testing is simply insufficient for detecting severe coding defects that may impact the reliability and maintainability of an application. Finally, in the current geopolitical context, programming vulnerabilities, or even hazardous code in a mission-critical application, could easily produce disasters in production – data corruption or loss, system downtime at crucial moments – all of which negatively affect business operations.
Unfortunately, most IT organizations have chosen to leave technical compliance issues aside, due either to limited resources or a lack of the required skills. Instead, they all too frequently assume that tersely worded SLAs will be enough to protect them over time. In reality, while today’s SLAs routinely include financial penalty clauses, fines and legal battles are not all that effective at preventing system failures.
Get it Right
In order to be successful, companies need to acquire and deploy software solutions that help manage these global partnerships by providing greater insight into the build process through real-time access to objective data. Employing a platform of automated analysis and measurement to assess the application as it is being built, for instance, affords transparency into the outsourced work, grants discipline into how information is handled and yields metrics to evaluate results.
With that kind of real-time access and information into how a company’s software is being built or customized, it won’t matter if the outsourcer is across the hall, across the street or across the ocean. You will always know just where your software is and if the outsourcer is building it efficiently and up to your high application software quality standards. Not having that kind of insight could lead to software issues that would scare the Dickens out of you!
In 1807, French playwright Charles-Guillaume Étienne penned the famous line, “On n’est jamais si bien servi que par soi-même.”
Those who do not speak French may still recognize this now-idiomatic phrase as the oft-uttered, “If you want something done right, do it yourself.”
Étienne’s words are a proclamation of self-reliance commensurate with the attitude of the French Revolutionary period during which he earned his acclaim; however, they are quite obviously not a hard and fast rule among businesses today. In today’s world, many companies that want “something done right” – including the development of software – look overseas for other companies to do it right for them.
Historically, outsourcing projects have been viewed as a difficult and even tenuous proposition. Variances in how work is conducted and language differences (like the one alluded to above) are seen as obstacles that need to be overcome in order to make outsourcing work…and often the reason for making it work is simply that it costs less to do the work overseas.
There are those who question whether it truly does cost less having it done overseas if these problems must be overcome or at least worked around. Others, however, perform their due diligence, not only before deciding to outsource, but also during and after the project has been done.
Ask the Questions
The decision of whether or not to outsource comes down to one question, “Can another company do it more efficiently than your own?” Curt Finch, CEO of payroll automation company Journyx, offers some advice on the Executive Street blog about how to make the decision of whether or not to outsource software development. He makes it sound very simple:
Answer four questions:
1. How much?
2. How long?
3. How risky?
4. How strategic?
The questions of “how much” and “how long” are rather straightforward and objective – either the potential outsourcer can do the work cheaper and quicker or it can’t. Even “how risky” under Finch’s definition – how solid the company is and how it is viewed by previous customers – comes down to figures (Finch points to stock price) and real, albeit anecdotal, data.
Unlike the first three questions, however, “How strategic?” is a far more subjective question to answer. By Finch’s admission, this question raises more questions:
“What will your IT shop learn from building this application in-house? Is this knowledge coherent with your company’s core business strategy? Will the education your team gains from this exercise lead to improved capability for your company’s business, or is it detracting from more appropriate knowledge?”
He admits these are hard questions to answer…or are they?
Get the Answers
Most people believe that outsourcing is akin to off-loading, and if that is the plan of the company shopping the project around, chances are good the project will turn out badly. Taking a hands-off approach to managing an offshore outsourcing project – by relying on SLAs, for example – and expecting high-quality output is not only unrealistic, it’s also unfair. Rather, close management – or, even better, increased visibility into the project using application software structural analysis – is critical to achieving the desired result. To gain the necessary visibility into the project, and in so doing also achieve the strategic value sought in outsourcing it, a company should consider implementing a platform of automated analysis and measurement to perform structural analysis at each stage of the build.
The next-best thing to hands-on management, structural analysis provides the visibility critical to catching code imperfections in the preproduction phase, before the application is deployed and causes costly and inconvenient outages or compromises security. With this hands-on approach to outsourcing, companies can realistically expect performance equal to what they could produce in-house.
That kind of visibility makes the most difficult of the outsourcing questions much easier to answer because the development of software is neither out of sight, nor out of mind, but rather it is simply software done right!
This blog has long professed the need for businesses to analyze, measure and assess their IT application portfolios to identify those issues with application software that cause a whole spate of headaches, from application failure, to business risk to increased technical debt.
To this end, large enterprises have applied automated solutions to perform this assessment by installing comprehensive platforms that analyze and measure existing applications and can also scrutinize application software during the build or customization process to catch issues as they happen. However, smaller businesses and individual developers generally do not have access to these large platforms, and even some larger enterprises have not yet availed themselves of the technology available to them. As a result, those who have yet to adopt automated analysis and measurement platforms find themselves doing the best they can with manual assessment of their application software, which is grossly inefficient.
Today, however, CAST released a solution for those who do not have access to comprehensive application analysis platforms with the launch of Rapid Portfolio Analysis (RPA).
RPA is a cloud-based SaaS offering that proactively identifies poorly performing applications. It provides an affordable, on-demand alternative to expensive enterprise application portfolio management or project portfolio management solutions, while automatically providing feedback on software health. The information RPA provides arms executives with what they need to determine which applications warrant further investigation or closer ongoing monitoring for structural issues. With companies spending as much as 70% of their IT budgets on maintenance and support costs, RPA can provide a quick overview of what could be ailing a company’s IT portfolio. By identifying those problem areas and then fixing them, CIOs could save their companies significantly by reducing or even eliminating business risk (i.e., security breaches, or outages and downtime due to application failure) and excess technical debt, which the CAST Report on Application Software Health (CRASH) identified back in December as reaching an average of over $3.6 million.
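As a back-of-the-envelope illustration of why that 70% figure matters (the budget size and reduction percentage below are hypothetical, used only to show the arithmetic):

```python
def innovation_budget(total_budget, maintenance_share, reduction):
    """Budget left for innovation after trimming maintenance spend.

    maintenance_share: fraction of the budget going to maintenance/support.
    reduction: fraction of that maintenance spend eliminated by fixing
    problem applications.
    """
    maintenance = total_budget * maintenance_share
    freed = maintenance * reduction
    return total_budget - maintenance + freed

# A hypothetical $10M IT budget at the 70% maintenance level leaves $3M
# for innovation; trimming maintenance by just 20% frees another $1.4M.
print(innovation_budget(10_000_000, 0.70, 0.20))
```

Even a modest cut in maintenance spend frees a disproportionately large share of the innovation budget, which is the case RPA is built to make.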
At a time when world economics dictate that companies do all they can on trimmed budgets, being able to identify problem areas quickly and relatively inexpensively would certainly allow CIOs to focus more of their budgets on innovation rather than remediation.
So when it comes to identifying problems in IT applications, it seems like “clouding” the issue might actually clear it up.
Last week, Capgemini released its second Financial Services World Quality Report. The report cited that while corporations across the globe continue to be constrained by budget issues, the complexity and volume of application software they handle continues to increase exponentially. As a result, Quality Assurance organizations are turning more and more to the cloud and outsourcing as strategies to achieve quality applications, while attaining optimal business value.
When it comes to outsourcing in particular, the report states, “With a more comprehensive outsourcing strategy, firms can derive value from transforming business processes, improving time to market, capturing operational efficiencies and further optimizing costs.”
As firms continue to increase their outsourcing efforts in search of business value, they should also be looking to increase the quality of the products produced by those outsourcers. Moreover, the IT services companies taking on outsourcing projects should be taking steps to assure the structural quality of the products they produce for their clients.
Quality is Job 1
The impetus to achieve greater structural quality was undoubtedly at the core of a recent decision by Mahindra Satyam, a leading global consulting and IT services provider, which this week announced the launch of Structural Testing Analysis & Measurement of Projects (STAMP). With nearly 30% of production defects attributable to structural quality flaws, STAMP will help clients’ application owners identify structural issues before they reach production and will provide insight for implementing corrections before application software is deployed, thereby reducing the overall cost of correction.
At the heart of STAMP is CAST’s platform of automated analysis and measurement, which shines a unique spotlight on the often unexplored structural quality of the most business-critical applications. CAST’s platform powers STAMP by analyzing the structural quality of the application stack, enabling Mahindra Satyam to deliver higher performance, greater reliability and increased security, while also reducing underlying technical debt.
In the recent announcement of his company’s partnership with CAST, GS Raju, global head of testing services for Mahindra Satyam, notes that the ability to weed out structural quality issues during pre-production is a market differentiator for his company. He says, “We see great opportunities in upscaling the value to our existing clients and prospects and elevating existing niche testing services by rolling out STAMP.”
My father was proud of his military service. He believed that young men and women could learn a lot not only from having served in the armed forces, but from having actually experienced the stress that comes with “taking fire.”
He was not one of those war mongering types who believed everybody should experience combat. He did say, though, that even navigating what is known as “the infiltration course” – the part of training where you crawl under barbed wire while live bullets fly over your head – is something that teaches you to stay calm when all hell is breaking loose around you.
Sounds like the kind of training that would help many development teams when dealing with technical debt.
Grace Under Fire
The Agile Architect, Mark J. Balbes, pointed out in a recent post over on Application Development Trends some of the less obvious effects of technical debt.
He describes a well-oiled development team that falls apart after failing to practice several of the critical software disciplines needed to create high-quality, agile software. At first, the team didn’t realize the cause of the strife, and then it appeared – technical debt. Technical debt manifests itself in many ways. Put off fixing bugs or pass on automated testing – it appears. Delay refactoring code until after a release – there it is. And then what happens? Customers start calling about bugs, and development teams take longer to build new features because the code wasn’t refactored. That short-cut is now costing you big-time. There is a development-team cost, too. As the team works around problems, easy tasks become more complicated. When one developer slows down, he or she forces the others to slow down too. The team gets frustrated and harried…and then the finger pointing starts.
In The Agile Architect’s example, his team inherited code with no automated tests, no manual tests, no documentation, dead code, spaghetti code and bugs that were all undocumented. The team was asked to have all of this fixed in two months, so the team got to it. Two developers decoupled the code into three major sections. The team then wrapped some sections with automated tests and refactored. Other sections they completely rewrote. They fixed bugs as they found them. By attacking the problems in pieces, they were able to ship improved versions at any time.
Taking a “divide and conquer” approach created an interesting result. Over time, software that was buggy earned a lot of attention and testing, so that at the end, the only software not tested was the original code that the team knew was stable. While the result here was positive, the accumulated technical debt created a crisis that was unnecessary. The existence of technical debt isn’t necessarily a problem; not having a handle on the amount of debt and how to control it is. Not many organizations keep close tabs on their levels of technical debt, or if they do, they measure the wrong things, such as stable legacy systems.
Prior Planning Prevents Poor Performance
Projects should start with developers creating a comprehensive plan that identifies the final expected operational requirements of the software. The plan should include the overall architecture, capabilities of the main components, scaling factors and interoperability standards. Provided it is factored in as the means to complete the project, an agile development approach can then be used to deliver rules, interfaces and functional outcomes. It’s often effective to complete work in a series of two-week “sprints”; an error in approach then lasts a maximum of two weeks and the team can fail fast and recover quickly. Any rework that’s required is focused on two weeks of work, not an entire project.

Despite the best-laid plans, too much technical debt sometimes still manages to accrue, but forging a strong culture can help mitigate its negative effects and keep a talented team together. Being honest about how agile the development process really can be and knowing there is top-level support for the project contribute greatly to building a strong culture, as do open communications and incentives to disclose technical debt when it appears.
Of course, the best approach to building a strong culture is to hire strong people. There is a steep learning curve in agile development and managers should expect a high failure rate – developers who really understand agile are few and far between.
Measuring and analyzing the quality and complexity of projects through automated solutions will also contribute greatly to reducing technical debt and should be incorporated into the planning and development process.
Strong people implementing well thought out processes and measuring and analyzing code throughout development while working in a positive, communicative culture have the best chance to succeed and are less likely to have to battle significant technical debt, let alone back down “under fire.”