Software Risk-Driven Application Development

Understanding Software Risks Created by Poor Application Development and Release Practices
While the decisions that drive software project managers, development teams and their leadership are often made in the best interest of the company, these groups sometimes fail to recognize the software risks those decisions or behaviors introduce to the business. A review of the latest software risks affecting businesses shows that development organizations are notoriously poor at managing software development processes such as releases and evolutions.

Moving your application to the cloud: Getting ready!

When we start talking about the cloud, several common questions come to mind:
What do you mean by “cloud”?
What standard requirements need to be fulfilled before moving to the cloud?
Is my data secure on the cloud?
What about application quality?
Is it easy to push my application to the cloud?
I will be examining these questions and their answers in a series of posts about the cloud.
The original goal for the cloud was to reduce the cost of IT infrastructure by allowing customers to remotely use an infrastructure managed by a third party: physical and virtual machines, disk space for storage, and other resources. This type of service model is termed Infrastructure as a Service (IaaS).
The cloud advanced to the next level by offering more than just the hardware. Cloud vendors started to offer the complete environment (for development and production) including the operating system, programming platform, databases, and web servers for hosting ‘N’ applications. This offering is called Platform as a Service (PaaS).
Today, almost every article or blog refers to Software as a Service (SaaS) as the most popular cloud offering. In SaaS, the cloud vendor installs, hosts, and manages your software application on the cloud, and the end users (referred to as cloud users) gain access to the application using specific cloud clients.
Like every new technology or innovation on the market, moving to the cloud has its own risks and benefits. However, it is worth understanding what it takes for an application to be considered cloud-ready.
Based on my experience, I have put together the seven most crucial requirements for a cloud-compatible application.

Multi-tenant architecture. This refers to a principle in software architecture where a single instance of the software runs on a server and serves multiple client organizations (tenants). It is achieved by ensuring a unique key for referring to or accessing any record in the database. Everything is linked to this key, so each cloud user gets their own view of the intended application or service (a minimal sketch follows this list).
Sign in/sign out. Apart from the legal bindings, there has to be a definite workflow, with an appropriate level of authentication, for every user to securely sign in to and sign out of the cloud environment or application, since we are uploading user data into the cloud environment. The same holds true for signing out, so that we do not keep the user's active data once the session ends.
Logging. Another important requirement is the ability to constantly monitor and log every action, transaction, and task the user performs in the cloud environment (mainly in the databases and application servers). There is also a need for load balancing and a fail-over system when we are dealing with business-critical applications.
Easy maintenance. The application must allow an easy and quick way to back up and restore the data in case of a crash or corruption. That backup plan, in turn, needs a good maintenance routine to perform housekeeping tasks (including log and disk space cleanup) for our application.
Hosting your application. If we plan to host our application on our own private cloud, we need a well-designed cloud landscape with the relevant servers, load balancers, firewalls, and proxy servers in place. If, on the other hand, we host our application on a public cloud, like Amazon's, we need not worry about these infrastructure design details.
Data privacy and security. Since users will enter their data into our cloud environment, we need to ensure data privacy by addressing major security requirements around SQL injection, cross-site scripting, architecture, logins, and passwords. (The sketch after this list shows parameterized queries, one common defense against SQL injection.)
Data transfers. Whenever we upload data to or download data from the cloud, in the form of reports or other flat-file exchanges, we must use standard encoding or encryption techniques and perform these operations over SFTP or HTTPS connections (see the second sketch below).
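
To make the multi-tenancy and data security points concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. The invoices table, the tenant names and the column layout are illustrative assumptions rather than any particular product's schema; the point is that every query is scoped by the tenant key and uses parameterized statements, which also happens to be one common defense against SQL injection.

```python
import sqlite3

# One shared database, one running instance; every row carries a tenant key.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, tenant_id TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO invoices (tenant_id, amount) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 300.0)],
)

def invoices_for_tenant(conn, tenant_id):
    """Return only the rows belonging to one tenant.

    The tenant key is applied on every query (via a parameterized
    statement), so each customer sees only its own slice of the data.
    """
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return cur.fetchall()

print(invoices_for_tenant(conn, "acme"))    # acme's two invoices only
print(invoices_for_tenant(conn, "globex"))  # globex's single invoice
```

Each tenant gets its own view of the application even though all tenants share one instance and one database.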
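
For the data-transfer requirement, here is a similarly minimal sketch assuming the third-party requests library; the endpoint, token and file name below are placeholders for illustration, not a real service.

```python
import requests

UPLOAD_URL = "https://example-cloud.invalid/reports"   # hypothetical endpoint
REPORT_PATH = "monthly_report.csv"

with open(REPORT_PATH, "w") as f:                       # create a dummy report
    f.write("customer,amount\nacme,120.00\n")

# Push the flat file over HTTPS; TLS certificate checking stays on.
with open(REPORT_PATH, "rb") as report:
    response = requests.post(
        UPLOAD_URL,
        files={"file": report},                         # multipart file upload
        headers={"Authorization": "Bearer <token>"},    # placeholder credential
        timeout=30,
        verify=True,  # the default, shown explicitly: reject invalid certificates
    )
response.raise_for_status()                             # fail loudly on HTTP errors
```

An SFTP exchange follows the same principle: the data never travels over a plain, unencrypted channel.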

The standard requirements around usability, scalability, and performance also apply depending on the business needs or type of application.
If you plan to push your application or product from your local environment to a cloud infrastructure, you will need to fulfill the above requirements for your respective application. Please remember that these requirements will also have an impact on the quality of your application.
Do you think your application has what it takes to move to the cloud? Share your thoughts in a comment below.

Getting SaaS-y about Technical Debt

I came across that old phrase, “Why buy the cow when you can get the milk for free?” the other day, in the context of marriage.  Why should people marry when they can just live together?  Well, you can imagine I came across a lot of opinions I won’t go into here.
An article by Naomi Bloom popped up, using the phrase in a technology context. She noted that vendors of traditional licensed/on-premise enterprise software had served themselves very well by forcing customers both to buy the apps and to own the data center and its operations, application management, upgrades, human resources, and more. This has provided traditional vendors considerable financial security and market power.
Clearly Defining the Cloud
Today’s multiple forms of cloud computing are changing all that, but we need to be careful of what passes for cloud computing, especially SaaS. Software marketers are rebranding all their products as SaaS, whether they really are or not, to take advantage of the latest ‘buzzword.’ Bloom notes that “true” SaaS must include four characteristics:

Software is made available to customers by the vendor on a subscription model;
The vendor hosts, operates, upgrades, manages and provides security for the software and all data;
The software architecture is multi-tenant, with a single code base and data structures, including metadata structures, that are shared by all customers; and
The vendor pushes out new releases on a regular basis that are functionally rich and opt-in.

Keep in mind that software can meet all these attributes and be “true” SaaS, but still be badly written, unprofitable, outdated or problematic in other ways. However, when well-architected, well-written and well-managed, true SaaS can provide many benefits, including improved economics, faster time-to-market, more frequent and lower-cost upgrades, greater value-added and/or new offerings, and improved agility.

SaaS Doesn’t Eliminate Technical Debt
One quality even true SaaS shares with traditional on-premise software is technical debt. Another benefit of the SaaS model not listed above is the continuous review of the software by multiple users, which can clue the vendor in to issues in the code that impact performance.
There’s also a new generation of cloud-based portfolio risk analysis solutions that quantify the size and structural quality of applications, evaluate technical debt and offer information and insights that support investment decision-making. These solutions provide continuous monitoring of the SaaS solution as well as risk and application analysis, so the vendor can implement a risk-based APM strategy faster, enabling better and safer decisions, portfolio modernization and transformation. They also profile the structural quality risk, complexity and size of any application to identify unexpected risks, and they quantify technical debt to proactively prevent software costs and risks from spiraling out of control.
If users think they are going to eliminate technical debt by moving to a SaaS model, their thinking is cloudy. But there are solutions to identify and help address technical debt for SaaS architectures that are just as robust as their on-premise counterparts.

Clouding the Outsourcing Issue, part 2

Don’t bother trying to reach me the next few weekends; it’s playoff time in the NFL!
I promised my wife way back when we started dating that she need not be a “football widow” every Sunday during the season. However, our relationship has spanned roughly the same time period as the unmitigated success of my New England Patriots and during that period she has come to know that my availability and attention during weekends from early January through the Super Bowl in early February are going to be dependent upon the NFL’s playoff schedule…especially when the Patriots are involved.
Fortunately, not only does she not get angry, but she also supports it, even though she dislikes sports. We’ve hosted Super Bowl parties, and in preparation I happily do the “grunt work” and “food prep”, but I yield the responsibility for organizing all the other elements of the party to her. Why?
Because much like organizations making a decision to outsource work, she knows what she’s doing better than I do.
The Blind Side
Back in September in “Clouding the Outsourcing Issue, part 1,” I likened the decision to outsource to a “Hail Mary” pass in football because much like this desperation pass, companies often view the relinquishment of control over a project when outsourcing as a “hope and a prayer” type of approach. But sometimes there’s no other option. In-house teams can’t keep up with the latest developments in hardware, software, network architecture and the like; meanwhile outsourced teams are more focused on making money than on providing solutions and service.
This already confusing argument is now being further convoluted by a new option: cloud computing.
Charlie Babcock, in a recent InformationWeek article, calls cloud vs. outsourcing “The Next Battleground.” Naysayers will claim that cloud technology isn’t mature enough to enter this debate, but I believe that with the use of powerful automated analysis and measurement solutions, the visibility IT teams gain provides the control necessary to manage a cloud-based network effectively. It’s just this kind of control that companies fear they lack when they outsource a project, so perhaps the cloud would provide them with the kind of “close to home” comfort level they seek.
A November 2011 PricewaterhouseCoopers survey of 489 IT executives, cited by Babcock, reports 77 percent of surveyed companies have started or have plans for some form of cloud computing and 64 percent said the cloud will be the “best way” to manage infrastructure three years from now.
Babcock adds that IBM, HP and others, who have deep connections with companies based on longstanding outsourcing relationships, do not necessarily have a leg up on Amazon, Rackspace and other cloud infrastructure providers. Outsourcers that add cloud services may keep the customer, but they will lose some of their profits, since racks of commodity servers will replace the highly specialized IT services they previously provided.
There’s lots more bad news for traditional outsourcers: 55 percent of those surveyed believe private cloud service providers will be best equipped to provide cloud infrastructure three years from now, versus 39 percent who believe traditional outsourcing companies will be. And, even among companies currently working with traditional outsourcers, 52 percent said providers with a cloud focus will be the best infrastructure partners in the future.
Respondents believe cloud-only service providers will be able to combine the managed infrastructure of an outsourcer with customers taking responsibility for managing workloads. In this scenario, if the customer instructs the service provider to run a series of applications at a given day and time, the customer can be certain it will happen, barring some type of service interruption.
Babcock writes that private cloud services are available either on-premises or over the public cloud. When offered through the public cloud, the services typically include hardware isolation from other customers’ content, encrypted communications over a VLAN or secure line, and secure data-handling procedures.
The Longest Yard
John Chapas II, an attorney with Reed Smith LLP, offers some related opinions in a Boardmember.com article from earlier this year. He comments that both outsourcing and cloud computing strategies come with potential security issues around sensitive information.
When data is maintained in-house, IT teams can easily determine whether it is protected at the level the company requires, and the company retains control over modifying data security and completing upgrades. This level of control is especially important where there are legal requirements surrounding data security and breach disclosure. These legal requirements focus on sensitive information, such as nonpublic personal information (e.g., Social Security numbers and credit card numbers). He adds that many states now have legally required remedies for data security breaches, such as written notifications to affected individuals. For an extensive breach, these notifications are not only time-consuming, they are also terribly expensive.
When a company outsources or employs a cloud infrastructure, the company no longer maintains control of security. This loss of control can be problematic since the company is still responsible for protecting the data and the same remedies apply if there is a breach.
Company IT and legal teams should require either the outsourcing or cloud service provider to possess and maintain security measures that meet the company’s legal and fiduciary responsibilities. This can include all requirements for data security such as encryption, vulnerability testing, audits, passwords, firewalls, and so on.
Whether it’s cloud or outsourcing, company managers often rely on the vendor’s reputation when they should be scrutinizing the contract to ensure the vendor meets data security requirements. Without appropriate verbiage in the agreement, a vendor might meet the company’s data security requirements at the outset but then make changes to security policies that would put the company out of compliance.
All the Right Moves
It’s pretty clear that moving infrastructure to the cloud (IaaS) is here to stay and will be an increasingly viable option for company IT teams. Regardless of whether data is maintained in-house, outsourced or managed in the cloud, an automated solution to analyze and measure software quality should be an essential part of the approach.
When company IT teams are considering either outsourcing or cloud-based strategies, they should require vendors to include ongoing software analysis and measurement as part of the offering, and this requirement should be written into the contract.  Structural analysis and measurement solutions are as important to business continuity as firewalls,  antivirus software and robust storage solutions.
Well, time for me to put on my “game face” and decide which team I want to see the Patriots beat on January 14. Meanwhile, IT teams should be putting on their own game faces, because when it comes to managing and protecting their company’s critical information assets effectively, whether it’s done in-house or via an outsourced or cloud solution, every day is the playoffs.

Sibling Rivalry: Code Quality & Open Source

Why does “Free” always seem to have a catch to it?
We know there’s “no such thing as a free lunch,” that “freedom isn’t free” and that if you get something for free, you probably got what you paid for. Even in the tech industry, when we talk about open source software, we immediately think “free”, yet instantly jump to the old caveat of “think free speech, not free beer,” the idea being that open source is the layer-by-layer developed product of well-intentioned developers seeking to produce high-quality software that competes with established applications.
Lately, though, there are some in the industry who are questioning whether or not open source software has lost sight of its mission to produce applications that meet high software quality standards. As Willie Faler recently posted over at DZone:
It seems by the time an open source project has reached some level of mass adoption or awareness, most of the time the projects codebase will have degraded into such a poor state as to where it is completely stagnant, or even worse, unmaintainable.
Could open source contributors be taking advantage of their “free speech” or are they writing code after too much “free beer?”
Butting Heads

While Faler admits he has no definitive answer to his dilemma, he offers a list of possible reasons why, in his estimation, open source has taken a nosedive quality-wise recently. He submits the following theories for this downward spiral:

Lack of discipline from core developers in enforcing good practices.
Demands for backward compatibility locking frameworks into poor, inflexible APIs, unable to refactor away poor past decisions.
Tendency among core developers to let in poor code contributions to appease community members.
Overeagerness to constantly add (sometimes unneeded) features and chase higher version numbers, rather than “sharpen the saw” and improve the core codebase and feature set.
Just naturally occurring entropy over time and no one dealing with it?

While we would like to believe these things do not happen, we would be fools to ignore them. Complacency, appeasement and entropy are all plagues from which businesses suffer; why not open source, too?
Why? Because it can’t afford to.
Embracing the Relationship

In some fashion, open source really needs to return to its roots where a community of software developers added to and improved upon what already existed. But just as we know “nothing is free,” we also know “you can’t go back.” Unfortunately, that leaves us in a precarious position of trying to identify the issues with open source software.
Sadly, this brings up the old “needle in a haystack” analogy because the average open source application contains roughly 450,000 lines of code (slightly more than your out-of-the-box apps). Trying to find the 100 or so offending lines of code by hand would not only be grossly inefficient, but if you subscribe to Faler’s theories of complacency, appeasement and entropy, it’s unlikely anybody would take the time to bother finding them.
So finding the offending code in that many lines would require some form of automated analysis and measurement platform, but there again lies a problem. Most platforms of this sort are available only to large corporations with generous IT budgets, so their use runs contrary to the idea of “free” no matter how you define it for open source – be it free as in low-cost or free as in free from the encumbrances of “the establishment.”
But this doesn’t mean there is no way to perform a thorough structural analysis of open source software. We are beginning to see structural analysis being made available as a SaaS offering via the cloud. Such a web-based, low-cost (with TCO running at around pennies per line of code) solution would surely agree with open source sensibilities while providing the visibility needed to identify and correct structural issues with the software.
Employing QaaS – “Quality as a Service” – could help reunite open source with its software quality brother.

Clouding the Outsourcing Issue

Back in August, “CIO Zone” posted a blog outlining the top five cloud computing trends. Smack-dab in the middle of the top five was this one: “Custom cloud computing services,” which delved into how outsourced IT organizations must focus on automated software and become experts in migrating to SaaS, PaaS and IaaS in order to ensure the least painful cloud migrations. It brought to mind how, in an effort to save money, so many businesses blindly hand over their whatever-it-is-to-be-done to outsourcers and hope for the best.
All you football fans might recognize this as a “Hail Mary.”
The “Hail Mary” Doesn’t Work in Outsourcing
In football, the Hail Mary has only a 5 to 10 percent chance of success, but desperate times call for desperate measures, so teams go for it. In business, however, lobbing one up and hoping for the best has no place, especially as it applies to outsourcing software builds. Relinquishing control over a software build almost certainly yields degraded structural quality, which promises a world of hurt in the form of technical debt or security gaps.
And, to be clear, quality problems are not necessarily due to sub-par outsourcers who couldn’t care less about building a quality application. Communication issues or cultural differences — as in the case of overseas outsourcing — can play a big part in compromising quality. Fortunately, static analysis can catch code imperfections before applications are deployed.
But the benefits of static analysis go way beyond just catching a bad line of code. It also grants greater visibility into how the software is built — from soup to nuts. For example, static analysis can provide insight into whether an application is being complicated with 100 lines of code when one would suffice. It sheds light on whether the outsourcer’s code includes repetitive processes, or whether the outsourcer is “coding in circles” (i.e., incorporating a process, negating it, then coding it back in). This ability to focus on the structural quality of the application as it is being built practically guarantees the application’s overall health, which encompasses not only performance and security, but also ease of customization and transferability (for further upgrades or customization). In fact, think of this visibility as micro-managing a build project without having to be on site.
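
As a rough illustration of the kind of structural check such an automated tool performs, here is a minimal sketch in Python using the standard-library ast module. It flags only one simplistic pattern, a variable that is assigned and then immediately overwritten before it is ever read, as a stand-in for detecting code that goes in circles; a commercial analysis platform applies hundreds of far more sophisticated, language-specific rules.

```python
import ast
import textwrap

# Sample input: the first assignment to "result" is dead work,
# overwritten before it is ever read.
SAMPLE = textwrap.dedent("""
    def total(prices):
        result = 0
        result = sum(prices)
        return result
""")

def find_overwritten_assignments(source):
    """Flag variables assigned and then reassigned before being read."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        # Look at each pair of consecutive statements in the block.
        for first, second in zip(body, body[1:]):
            if not (isinstance(first, ast.Assign) and isinstance(second, ast.Assign)):
                continue
            first_targets = {t.id for t in first.targets if isinstance(t, ast.Name)}
            second_targets = {t.id for t in second.targets if isinstance(t, ast.Name)}
            # Names read on the second statement's right-hand side.
            read_names = {n.id for n in ast.walk(second.value) if isinstance(n, ast.Name)}
            for name in (first_targets & second_targets) - read_names:
                findings.append((second.lineno, f"'{name}' is overwritten before it is read"))
    return findings

for lineno, message in find_overwritten_assignments(SAMPLE):
    print(f"line {lineno}: {message}")  # e.g. line 4: 'result' is overwritten before it is read
```

A check like this, run automatically on every delivery from the outsourcer, is what turns visibility from a slogan into something a contract can actually reference.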
On a Clear Day…You Can See Forever
It’s clear that greater visibility into an outsourced software project is critical to quash quality issues, but it can also ensure a project will be delivered on time and within budget. For example, when an outsourcer builds in extra, unneeded code, it drags out the project and inflates its cost. More visibility into the build enables the supervision required to curb these unnecessary costs.
The visibility into the process is no doubt a benefit to companies, but lest it seem like a yoke around the outsourcer’s neck, the type of scrutiny that comes with static analysis tools can be a value-add for the outsourcer too. Greater visibility into the application as it’s being built will very likely streamline the process, ensure a flawless application and result in a very satisfactory product…which in turn could mean more business for the outsourcer in the future.
Structural quality of applications doesn’t have to be a casualty of outsourcing. When increased visibility into the project ensures cost and quality expectations are met, we can leave the Hail Marys to the football field.

CAST Highlight Gives Enterprises a Kick in the Apps

System outages, software failures, security breaches and IT maintenance costs are all rapidly on the rise. It seems like not a day goes by that we don’t read about one company or another announcing that their system went down or revealed personal data to hackers. Couple that with published estimates of technical debt at a half-billion dollars globally and $1 million per company and you see that things are getting out of hand. The sad part about it is it doesn’t have to be that way.
Most of these outages, breaches and rising maintenance costs can be traced back to structural issues in the application software. Were enterprises to perform some measure of due diligence to find and shore up these structural flaws, the incidents could conceivably drop significantly. For many of these enterprises, though, the plaguing question has been how to justify spending the money to find the 100 lines of code in the average 400K-line app – that’s 0.025%, or one-fourth of one-tenth of one percent – that hold the potential for some form of risk. And that’s taking for granted that IT execs know about all of the applications in their systems; many of them are “flying blind” in this respect.
As of today, there is a better way!
This morning, CAST launched CAST Highlight, the industry’s first comprehensive, cloud-based SaaS solution that rapidly provides IT leaders with a snapshot of any or all of the company’s applications. It performs automated analysis and measurement of software in seconds and does not require code to be uploaded to the cloud – averting a major security concern! It then provides feedback on software health and arms executives with the information they need to determine which applications need further investigation or closer ongoing monitoring for structural issues.
The best part is that this fast snapshot of a company’s software health — which comes as a result of CAST’s acquisition of software from the European aerospace company EADS, combined with the company’s own 20-plus years of experience performing analysis and measurement of application software — will cost enterprises only a few pennies per line of code.
At that price, there’s no reason why enterprises shouldn’t give themselves a big “kick in the apps” to locate the issues that could lead to failures. Meanwhile, CIOs who don’t perform structural analysis of their software deserve a swift kick somewhere else.