Software Risk: Executive Insights on Application Resiliency


Software risks to the business, specifically Application Resiliency, headlined a recent executive roundtable hosted by CAST and sponsored by IBM Italy, ZeroUno and the Boston Consulting Group. European IT executives from the financial services industry assembled to debate the importance of mitigating software risks to their businesses.

What Do Software Analytics and Your Doctor Have in Common?

As it turns out, plenty.
Recently, the U.S. government has implemented healthcare reimbursements based on the outcome of medical treatments rather than on the traditional fee-for-service approach. These performance-based programs are designed to improve healthcare quality while lowering treatment costs. It’s this outcomes-based approach that Fortune 500 companies are now considering as a way of reducing application development and maintenance (ADM) costs while improving software quality.
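As a purely hypothetical illustration of that incentive shift (every number below is invented, not drawn from any real contract), compare paying an ADM vendor for effort with paying for defect-free output:

    # Hypothetical fee-for-service vs. outcomes-based ADM pricing sketch;
    # all figures are invented for illustration only.
    hours_billed = 2000
    hourly_rate = 100.0                  # fee-for-service: pay for effort
    function_points_delivered = 400
    price_per_accepted_fp = 550.0        # outcomes-based: pay per sound unit of work
    defect_free_fraction = 0.90          # only defect-free work is paid for

    fee_for_service = hours_billed * hourly_rate
    outcomes_based = (function_points_delivered * defect_free_fraction
                      * price_per_accepted_fp)

    print("Fee-for-service bill: $%s" % format(fee_for_service, ",.0f"))  # $200,000
    print("Outcomes-based bill:  $%s" % format(outcomes_based, ",.0f"))   # $198,000

Under the second model, the only way the vendor grows its revenue is by raising the defect-free fraction of its work, which is exactly the incentive the healthcare analogy points at.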

5 Keys to Optimizing Cost-Effectiveness of Captives

Companies seeking to reduce time to market while improving application quality today usually choose between assigning application development projects to in-house teams or to outsourced system integrators (SIs). However, the cost arbitrage of Global In-House Centers (GICs), better known in the industry as “Captives,” continues to provide advantages in cost competitiveness that cannot be overlooked.

System Level Analysis Keeps Coca-Cola Smiling Webinar Recap

Michael Furniss, Director of Software Quality Assurance and Testing COE at Coca-Cola’s Bottling Investment Group, led a discussion on how system-level analysis improves dialog with application service providers. He shared his experience of how software analysis and measurement has enhanced his traditional process and tool landscape, leading to better identification of legacy SAP code vulnerabilities that can cause performance and stability issues. Mr. Furniss outlined how Coca-Cola has deployed the solution across its global organization and how it focuses development efforts to reduce risk and total cost of ownership while keeping executive sponsors and partners happy.
Want to hear the Coca-Cola webinar? Listen now!
If you would like to hear more from Coca-Cola watch this video: https://www.youtube.com/watch?v=gTg4IdO0o78

Quality is a Happy Place

I love my job!
I’ve always been an avid writer, even as a kid. So when it came to career choices, my decision to enter a profession that demanded writing skills seemed like a natural fit.
I started out as a newspaper reporter, following in my father’s footsteps, but as the jobs and money there began drying up in the mid-1990s, I took my interest in technology, made the jump to writing for high-tech companies, and have been happy doing this job ever since.
For many years, I served as something of a ghostwriter, producing press releases on behalf of my employers and articles for magazines, journals, books, papers, websites and other media, all of which were published under the names of corporate executives. Then came the advent of the blog. Many years after my last newspaper byline, I finally had the gratification of seeing my own name attached to a published piece of writing…and it felt good. I have really enjoyed blogging about the industry I’ve come to know so well over the past decade and a half.
But apparently there are people out there who claim to love their jobs even more. Last month, Forbes reported that the career-search site CareerBliss had conducted a poll of the “20 Happiest Jobs” and number one on the list was software quality assurance engineer.
Shiny Happy People
Now some might be surprised at this result. After witnessing what happened last year, with near-weekly reports of security breaches, glitches and outages, I’m sure some would have assumed this is a profession under a great deal of stress. They might point to the security breaches and outages that have made news not only in the tech press but also nationally and globally, and say that maybe these engineers are happy because such issues bring them job security.
As Forbes reported on the CareerBliss study, though:
“software quality assurance engineers said they are more than satisfied with the people they work with and the company they work for. They’re also fairly content with their daily tasks and bosses.”
Forbes also notes that these professionals earn salaries of around $100K per year, give or take a few tens of thousands, which undoubtedly adds to their satisfaction.
Happy Feat
Personally, I think the reasons for questioning why software quality assurance engineers are happy should be chalked up to the “there’s one in every crowd” mentality. After all, those whose job it is to ensure that a company’s application software is structurally sound know their jobs and reputations are on the line every time something is deployed.
What they must also realize, however, is that this job is finally earning the respect it deserves and maybe, at long last, their insistence upon quality is being heard. As organizations that experience software issues – like Google, Apple, Sony, Toyota, Citi, even the Federal Government – begin to recognize the damage that can be done to their reputations, not to mention sales and security, the importance of the software quality assurance engineer becomes amplified.
Executives on the business side of organizations should now realize that miscues cannot be overlooked in the name of marketing. They should also recognize not only the importance of software quality, but also the need for better tools to assess it. Whereas those who monitored software quality in the past had to pore over line after line of code to find defects, today’s engineers can apply platforms of automated analysis and measurement that are far more sophisticated and efficient, checking thousands of rules per second against new and even legacy code. These tools are no longer just a luxury of large enterprises, as some are now available on a Software-as-a-Service (SaaS) basis via the cloud.
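To make the idea concrete, here is a minimal, hypothetical sketch in Python of what a single automated structural check might look like. It is not CAST’s platform or any real product’s rule set; commercial platforms run thousands of far deeper, cross-technology rules. The two rules and the size threshold below are invented for illustration.

    # Toy structural-quality checker: flags overly large functions and
    # bare 'except:' clauses, two simple stand-ins for real structural rules.
    import ast

    MAX_FUNCTION_NODES = 25  # invented size threshold, counted in AST nodes

    def check_source(source, filename="<memory>"):
        findings = []
        tree = ast.parse(source, filename=filename)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                size = sum(1 for _ in ast.walk(node))  # nodes inside the function
                if size > MAX_FUNCTION_NODES:
                    findings.append("%s:%d function '%s' is large (%d nodes)"
                                    % (filename, node.lineno, node.name, size))
            elif isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append("%s:%d bare 'except:' hides defects until production"
                                % (filename, node.lineno))
        return findings

    if __name__ == "__main__":
        sample = "def risky():\n    try:\n        do_work()\n    except:\n        pass\n"
        for finding in check_source(sample):
            print(finding)

The value lies less in any single rule than in running every rule automatically, continuously and across the whole codebase, something no human reviewer poring over code by hand can match.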
Even those within the development community – at least those who truly take pride in their work – recognize the need for intense analysis of application software. Rather than feeling like “Big Brother” is watching, they see the causal relationship between structural quality analysis and good application software.
Good work environment, good pay and the satisfaction of knowing you’re responsible for good software – I’d say those are elements of a good job…but I still like mine better.

Getting Quality to the Core of Outsourcing

Last week, Capgemini released its second Financial Services World Quality Report. The report found that while corporations across the globe continue to be constrained by budget pressures, the complexity and volume of the application software they handle keep growing rapidly. As a result, Quality Assurance organizations are turning more and more to the cloud and to outsourcing as strategies for achieving quality applications while attaining optimal business value.
When it comes to outsourcing in particular, the report states, “With a more comprehensive outsourcing strategy, firms can derive value from transforming business processes, improving time to market, capturing operational efficiencies and further optimizing costs.”
As firms continue to increase their outsourcing efforts in search of business value, they should also be looking to increase the quality of the products produced by those outsourcers. Moreover, the IT services companies taking on outsourcing projects should be taking steps to assure the structural quality of the products they produce for their clients.
Quality is Job 1
The impetus to achieve greater structural quality was undoubtedly at the core of a recent decision by Mahindra Satyam, a leading global consulting and IT services provider, which this week announced the launch of Structural Testing Analysis & Measurement of Projects (STAMP). With nearly 30% of production defects attributable to structural quality problems, STAMP will help clients’ application owners identify structural issues before they reach production and will provide the insight needed to implement corrections before application software is deployed, thereby reducing the overall cost of correction.
At the heart of this spotlight on the largely unexplored structural quality of business-critical applications is CAST’s platform of automated analysis and measurement. The platform powers STAMP by analyzing the structural quality of the entire application stack, enabling Mahindra Satyam to deliver higher performance, greater reliability and increased security while also reducing underlying technical debt.
In the recent announcement of his company’s partnership with CAST, GS Raju, global head of testing services for Mahindra Satyam, noted that the ability to weed out structural quality issues during pre-production is a market differentiator for his company. He said, “We see great opportunities in upscaling the value to our existing clients and prospects and elevating existing niche testing services by rolling out STAMP.”

Fed Should Budget for Technical Debt

It’s a presidential election year in the U.S. That means lots of attention paid to people saying what they think we want to hear in order to secure election to office. It also means the standard operations of government tend to fade into the background.
Take the Federal budget debate. In most years it would be front-page material, particularly in a year when Congress has vowed to make significant cuts to the budget in order to reduce the deficit. With election news grabbing the spotlight every night, though, preliminary discussions have generated very little news.
One item that has been brought up, however, is the proposal to cut a portion of the Federal government’s IT budget. As Nick Hoover reported recently in InformationWeek, President Obama’s preliminary fiscal budget calls for a relatively slight trimming of the Federal IT budget, only 1.2%. In real dollars, however, that translates to $900 million in cuts. Two-thirds of that sum will come from the IT budget of the Department of Defense…this just one year after the DoD announced the largest cyber theft of sensitive material in its history.
In offering the reasoning for the cuts, Federal CIO Steven VanRoekel said that as much as $300 million of the savings will be achieved through data center consolidation. He reasons that by centralizing where information is stored, the government will require less hardware, less space and fewer personnel to house the data on which it runs. But this raises the question: where will the remaining $600 million come from?
An Inconvenient Truth
Obviously, the cuts to the Federal IT budget will need to go deeper than data center consolidation alone can deliver. Unfortunately, as with any business, cutting funds from budgets inevitably results in reductions in quality.
As last year demonstrated, the U.S. government has become a persistent target for cyber terrorism. The cyber theft of 24,000 sensitive files from a defense contractor, the July 4th cyber attack on Department of Energy contractor Pacific Northwest National Laboratory and the infection of a U.S. Air Force drone by a computer virus all illustrate the dire need to bolster the quality of application software in all facets of government.
By the admission of outgoing Federal CIO Vivek Kundra, the government already has a problem following through on IT projects, which can have an adverse effect on quality. Cutting Federal IT funding could further exacerbate this issue. For these cuts to come at a time when assurance of quality is so vital could leave the government even more susceptible to a cyber attack.
Making Quality Self-Evident
In a rather coincidental twist, the answer both to how the Federal IT budget can be cut further and to how security and other application software quality issues can be addressed may lie in the same effort: the effort to cut, or at least control, technical debt.
As we’ve discussed in many posts here, technical debt represents the maintenance cost of repairing issues in application software that surface after deployment, issues that often could have been detected beforehand. By bringing technical debt into check, funds that would otherwise be spent fixing problems could instead be directed toward more innovative protection of the government against cyber terrorism.
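As a back-of-the-envelope illustration of what bringing technical debt into check means in dollars (every figure below is invented, not drawn from any Federal budget), technical debt is commonly estimated by pricing out the effort to remediate known structural violations:

    # Hypothetical technical-debt estimate; all counts, fix times and
    # rates are invented for illustration only.
    violations = {               # severity: (open violations, avg. hours to fix)
        "critical": (120, 8.0),
        "high":     (450, 4.0),
        "medium":   (2300, 1.5),
    }
    hourly_rate = 75.0           # assumed blended developer rate, in dollars

    debt = sum(count * hours * hourly_rate
               for count, hours in violations.values())
    print("Estimated technical debt: $%s" % format(debt, ",.0f"))
    # -> Estimated technical debt: $465,750

Every violation fixed before deployment removes its line from that ledger, and that is precisely the money that could be redirected toward defending against cyber terrorism.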
Not only would bringing technical debt into check trim the Federal IT budget, it would also mean that money spent on IT matters could be used more efficiently.
But how can government achieve such efficiencies and address technical debt?
The government needs to pay greater attention to the structural quality of its application software, and it must do so in a highly efficient manner. That means every entity working on Federal IT, both in-house and outsourced, needs to employ some form of automated structural analysis of its systems to detect issues before they result in breaches and outages.