The State of Cloud Adoption in Insurance – Look Out for Migration Bumps Ahead!

Insurance companies still spend a lot of money maintaining the infrastructure behind their core legacy applications. That spending carries an opportunity cost: the same money could fund change and innovation on other fronts, such as new product development, distribution channels and customer experience.

Who’ll Stop the Rain: Seeking Quality in the Cloud

It’s nearly impossible these days to pick up a trade publication covering the tech industry without reading something about cloud computing. The plethora of coverage is enough to make one think that cloud computing is the latest technological panacea, good for everything from live data storage to data archiving and all enterprise needs in between.
Spending on cloud solutions also bears out its popularity. Late last year, Ian Song, senior research analyst for IDC’s Enterprise Virtualization Software program, placed global spending on cloud initiatives in 2009 at roughly $17 billion. He predicted this figure would rise at an average annual rate of 26 percent and reach $44.2 billion by 2013 – more than two and a half times the 2009 figure.
David Linthicum, CTO and founder of Blue Mountain Labs and a blog contributor at Forbes.com, calls what is taking place a “migration to cloud computing” and says for enterprises in the modern era it is “ultimately a matter of survival.”
What is particularly interesting about Linthicum’s account is his observation that this migration is happening as much from the ground up – experimented with inside corporate projects – as it is from directives handed down from the executive boardroom. Linthicum postulates:
…you can consider the migration to cloud as coming in two directions: Bottom up from the projects, and top down from IT leadership that’s strapped for cash. Figure they will meet in the middle sometime around the end of next year; IT leadership will adopt cloud once it’s been proven at the project levels, and the core driver will be cost.
Reducing Static
While enterprises are only now becoming more amenable to cloud solutions, cloud computing has been, since its inception, a wise choice for smaller companies, which see software as a service as a way to reduce costs while allowing for future expansion and scalability. Along the same lines, cloud computing could be a useful vehicle for the structural analysis of application software.
Automated analysis and measurement – arguably the best solution for assessing structural quality of application software – has historically been seen as the province of larger enterprises and development teams where multiple hands are writing hundreds of thousands of lines of code. What of the smaller software companies, mobile app developers or even the independent developers? They, too, could derive benefit from the ability to perform static analysis of their software or customization projects.

A Silver Lining
A cloud portal version of automated analysis and measurement could be the “silver lining” for smaller developers and companies looking to improve the structural quality of their software. Such a portal would let them run their applications through a cloud-based service to detect structural flaws and to test for the key health factors of application software quality: security, changeability, transferability, robustness and performance. It would also be a boon for mobile developers, not to mention app stores, who could assure smartphone end users that the software they download to their devices is safe and robust.
Whatever way you float it, the sky could be the limit for cloud computing and its ability to yield QaaS – QUALITY as a service.

The Chaos Monkey

Sometime last year, Netflix began using Amazon Web Services (AWS) to run their immensely successful video streaming business. They moved their entire source of revenue to the cloud and are now totally reliant on the performance of AWS.
How would you manage the business risk of such a move? Stop reading and write down your answer. Come on, humor me. Just outline it in bullet points.
OK, now read on (no cheating!).
Here’s what I would have done. Crossed my fingers and hoped for the best. Of course you monitor and you create the right remediation plans. But you wait for it to break before you do anything. And you keep hoping that nothing too bad will happen.
Obviously, I don’t think like the preternaturally smart engineers at Netflix. Here is how they describe what they did on the Netflix Tech Blog (I first read about this on Jeff Atwood’s blog, Coding Horror):
“One of the first systems our [Netflix] engineers built in AWS is called the Chaos Monkey. The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.”
This is what proactive means! How many companies have the guts, backed up by the technical prowess, to do this (not just to build the Chaos Monkey, but to deal with the destruction it leaves in its wake)?
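To make the idea concrete, here is a deliberately toy sketch of what “randomly kill instances” means: pick a victim from a fleet at random and take it down. The fleet names and the kill_instance function are stand-ins invented for illustration; Netflix’s real Chaos Monkey operates on live AWS instances and services.

```python
import random

# Stand-in for a fleet of service instances (in reality, live AWS instances)
fleet = ["web-01", "web-02", "api-01", "api-02", "recommendations-01"]

def kill_instance(instance):
    """Placeholder for actually terminating an instance via your cloud provider."""
    print(f"Chaos monkey killed {instance} -- does the service still work?")

def unleash_chaos_monkey(fleet):
    # Failures strike at random in production, so the test strikes at random too
    victim = random.choice(fleet)
    kill_instance(victim)

unleash_chaos_monkey(fleet)
```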
It dawned on me that IT systems are constantly bombarded by countless chaos monkeys, or one chaos monkey if you prefer, controlling hundreds of variables. The best way to get ahead is to simulate the type of destruction these chaos monkeys might cause so you can be resilient rather than reactive to monkey strikes.
And strike the monkeys will. Especially when software is changing rapidly (Agile development, major enhancements, and the like). In these conditions, the structural quality of software can degrade over time, iteration after iteration and release after release.
So I built a chaos monkey to simulate this deterioration of structural quality as an application is going through change. Here’s how it works.
An application starts with the highest structural quality – a 4.0 (zero is the minimum quality score). At the end of each iteration/release/enhancement one of three things might happen to this value:

It might, with a certain probability, increase [denoted by Prob(better quality)]
It might, with a certain probability, decrease [Prob(worse quality)]
It might, with a certain probability, stay the same [Prob(same quality)]

Of course, we don’t know what each of these probabilities should be – once you have structural quality data at the end of each iteration for a few projects, you will be able to estimate them. Nor do we know how much the structural quality will increase or decrease at each iteration, so we can try out a few values for this “step” up or down.
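If you want to play with it yourself, here is a minimal sketch of the simulation in Python. The default probabilities and step size below are invented placeholders, to be replaced with estimates from your own per-iteration quality data.

```python
import random

def quality_chaos_monkey(iterations=24, start=4.0, floor=0.0, ceiling=4.0,
                         p_better=0.2, p_worse=0.5, step=0.1, seed=None):
    """Random walk of a structural quality score across iterations/releases.

    p_better, p_worse (and the implied p_same = 1 - p_better - p_worse) and
    the step size are guesses; estimate them from real measurements.
    """
    rng = random.Random(seed)
    score = start
    history = [score]
    for _ in range(iterations):
        roll = rng.random()
        if roll < p_better:
            score = min(ceiling, score + step)   # quality improves
        elif roll < p_better + p_worse:
            score = max(floor, score - step)     # quality degrades
        # otherwise the score stays the same
        history.append(round(score, 2))
    return history

print(quality_chaos_monkey(seed=1))
```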
After 24 iterations, here is where the chaos monkey has left us.
The Structural Quality Chaos Monkey
Because structural quality lies beneath visible behavior, it can be difficult to detect and monitor in the rough and tumble of development (or rapid enhancement). Even when structural quality drifts downward in small steps, the decline can quickly accumulate and drive down an application’s reliability. It’s not that you or your teammates lack the knowledge; you simply don’t have the time to ferret out this information. Business applications contain thousands of classes, modules, batch programs, and database objects that need to work flawlessly together in the production environment. Automated structural quality measurement is the only feasible way to get a handle on structural quality drift.
Once you know what to expect from the chaos monkey, you can build in what you need to prevent the decline rather than be caught by surprise.
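One simple way to avoid being caught by surprise is to put a tripwire on the measured scores themselves. The sketch below is a hypothetical example: it flags a release once quality has slid by more than a tolerance you choose over the last few releases (the window and tolerance defaults are invented for illustration).

```python
def quality_drift_alert(history, window=3, tolerance=0.2):
    """Return True if structural quality has dropped by more than
    `tolerance` over the last `window` releases.

    `history` is the list of per-release scores, whether simulated by the
    chaos monkey above or produced by real automated measurement.
    """
    if len(history) <= window:
        return False                      # not enough releases to judge yet
    return (history[-1 - window] - history[-1]) > tolerance

# Example: a slow slide from 4.0 that trips the alert
print(quality_drift_alert([4.0, 3.9, 3.9, 3.8, 3.7, 3.6]))  # True
```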
Long live the chaos monkey!

Cloud – Measure Quality Before You Migrate

A recent Booz Allen economic study highlights two key drivers of cost savings in a cloud environment: the speed at which you migrate your applications to the cloud and the extent to which you can reduce the internally-managed infrastructure supporting these migrated applications. The faster the better; the greater the reduction of internally-managed infrastructure the better.
But even before you get to the speed of migration, there’s a more fundamental question to answer: which applications are suitable for migration to the cloud? The answer will depend on the specifics of your application portfolio and on your (and your business’s) appetite for risk. In particular, you’ll have to balance cost reduction against performance and security risks.
Here is how CAST’s quality metrics can inform your migration plan.
A CAST analysis can determine which applications are ready for cloud migration and vet the performance of those applications before you put them on the cloud. Once there, CAST lets you monitor your application’s performance painlessly, to make sure you aren’t wasting your money.
a. Understand how well the application will perform by measuring robustness, performance, and security. Some of these issues can be exacerbated in a cloud environment, so it’s best to know about them before you migrate. Also keep in mind that cloud hosts can kick you off the cloud if they determine that your application puts others on the cloud at risk.
b. In measuring quality, CAST can also quickly highlight and measure the drivers of application costs. Some of these cost drivers remain whether you’re on the cloud or not, so it’s important to know about them from the start.
c. If cloud is a cost-cutting measure, use CAST metrics to make sure you’re not burning more MIPS, using more memory, and transferring more data than you should.
Use application quality information to make the best migration decision.
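As a rough illustration of what that decision could look like, here is a hypothetical readiness screen: each application’s robustness, performance, and security scores are checked against thresholds set from your own appetite for risk. The application names, scores, and thresholds are invented for the example; this is not CAST’s API, just a sketch of the idea.

```python
# Hypothetical per-application scores on a 0-4 scale (illustrative data only)
applications = {
    "claims-processing": {"robustness": 3.6, "performance": 3.2, "security": 3.8},
    "policy-admin":      {"robustness": 2.4, "performance": 2.9, "security": 3.1},
}

# Minimum scores you are willing to accept before migrating (your risk appetite)
thresholds = {"robustness": 3.0, "performance": 3.0, "security": 3.5}

def migration_gaps(scores, thresholds):
    """Return the health factors that fall below your thresholds."""
    return [factor for factor, minimum in thresholds.items()
            if scores.get(factor, 0.0) < minimum]

for app, scores in applications.items():
    gaps = migration_gaps(scores, thresholds)
    verdict = "ready to migrate" if not gaps else "hold: weak on " + ", ".join(gaps)
    print(app + ": " + verdict)
```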

Cloud(s) Gathering Over DC

Yesterday CAST hosted a breakfast for IT leaders in government agencies – one of the themes was moving applications to the Cloud. I was surprised (and relieved) to see so many agencies represented – FEMA, DHS, SEC, NASA, and the Defense Intelligence Agency, to name just a few.
Based on yesterday’s conversation and the questions from the audience, I’m even more convinced this is something companies are going to keep asking us about.  The considerations around reliability, security and maintainability don’t change when you move applications to the Cloud – if anything they become more serious. 
I’m certain this is going to be a theme for 2010 and luckily for me, CAST belongs in this conversation.
Our customers will need guidance on which applications make sense to move and which don’t – and how to make sure those applications they migrate to the Cloud don’t pose risks or compromise performance. 
To hear more about our thoughts on cloud, see Jitendra’s knowledgeable and clever post, ‘Cloud – is that something you might be interested in?’
If you’re interested in the presentations from yesterday, leave a comment and I’ll send them to you. Because the session was so popular, we might record a web version for the private sector. Stay tuned!

Cloud – Is that Something You Might Be Interested In?

Recently, an Australian team studied the performance of the Amazon, Google, and Microsoft Clouds. The results reminded me of Bob on Entourage.
The results are not surprising. The on-demand cloud services from these companies “suffer from regular performance and availability issues.”
Now, not to make too much of this — we already know that blazing performance on the cloud is neither a promise these vendors make nor an economic reality. After all, if you want cheap AND scalable, something’s got to give.
But you can be prepared.
If you could precisely measure the performance and availability of an application on the cloud, would that be something you might be interested in?
If you could do this before you migrated to the cloud, would this be something you might be interested in?
If you’re a vendor of Cloud services, would you be interested in tracking not just usage, but quality of service?
Well, you can. In what follows I’ll show you exactly when and what to measure for optimal migration.

1. If you manage an IT organization, measure application quality before you move it to the Cloud. Software quality metrics will determine which applications are ready for migration to the cloud and vet the performance of those applications before you put them on the cloud. Once on the cloud, these same quality measures let you monitor your application’s performance painlessly, to make sure you aren’t wasting your money.

a. Understand how well the application will perform by measuring robustness, performance, and security. (Cloud hosts can kick you off the cloud if your application puts others on the cloud at risk.)

b. When you measure quality, you quickly highlight and quantify the drivers of application costs.

c. If cloud is your path to cost cutting, use these quality metrics to make sure you’re not burning more MIPS, using more memory, and transferring more data than you should (a rough sketch of such a check follows this list).

2. As a SaaS/Cloud vendor, providing quality metrics to your customers differentiates you from the competition.

a. Measure and communicate the quality of your SaaS/Cloud environment to current and potential customers.

b. Use application quality metrics to demonstrate the measurable cost of quality of your services.
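To make point 1c concrete, here is a small, hypothetical sketch that compares resource consumption before and after a migration. The metric names and figures are invented for the example; in practice they would come from your own monitoring data.

```python
# Hypothetical monthly consumption figures before and after migration
before = {"cpu_hours": 1200.0, "memory_gb_hours": 5400.0, "data_transfer_gb": 850.0}
after  = {"cpu_hours": 1350.0, "memory_gb_hours": 5100.0, "data_transfer_gb": 1200.0}

def consumption_report(before, after):
    """Print the percentage change in each resource metric after migration."""
    for metric in before:
        change = (after[metric] - before[metric]) / before[metric] * 100
        direction = "more" if change > 0 else "less"
        print(f"{metric}: {abs(change):.1f}% {direction} than before migration")

consumption_report(before, after)
```

A report like this is the quickest way to see whether the promised cost savings are actually showing up in your bill.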

I’d be glad to tell you more – just email me. Or go here.
Now, Bob may be a parody of himself, but he gets to the core of what matters. In software, the only thing that matters in the end is the product, the stuff, aka the code. It’s just so difficult to measure that people get frustrated.
But it’s something you should be interested in.