Improving Code Quality in DevOps

We welcome guest blogger Bill Dickenson, an independent consultant and former VP of Application Management Services for IBM, who brings decades of experience in application development and DevOps. Dickenson’s post below discusses how using CAST’s automated software analysis and measurement solutions helps achieve the benefits of DevOps, while eliminating the risks.
The recent move to cloud-based development/operations (DevOps) is changing the testing and development lifecycle by accelerating the speed at which code can migrate from development, through testing, and into production. Cloud-based testing environments can be instantiated and refreshed at unprecedented speed.

Getting SaaS-y about Technical Debt

I came across that old phrase, “Why buy the cow when you can get the milk for free?” the other day, in the context of marriage.  Why should people marry when they can just live together?  Well, you can imagine I came across a lot of opinions I won’t go into here.
An article by Naomi Bloom popped up, using the phrase in a technology context. She noted that vendors of traditional licensed/on-premise enterprise software had served themselves very well by forcing customers not only to buy the apps but also to own the data center and its operations, application management, upgrades, human resources, and more. This has provided traditional vendors considerable financial security and market power.
Clearly Defining the Cloud
Today’s multiple forms of cloud computing are changing all that, but we need to be careful of what passes for cloud computing, especially SaaS. Software marketers are rebranding all their products as SaaS, whether they really are or not, to take advantage of the latest ‘buzzword.’ Bloom notes that “true” SaaS must include four characteristics:

Software is made available to customers by the vendor on a subscription model;
The vendor hosts, operates, upgrades, manages and provides security for the software and all data;
The software architecture is multi-tenant, has a single code base and data structures, including metadata structures that are shared by all customers; and
The vendor pushes out new releases on a regular basis that are functionally rich and opt-in.

Keep in mind that software can meet all these attributes and be “true” SaaS, but still be badly written, unprofitable, outdated or problematic in other ways. However, when well-architected, well-written and well-managed, true SaaS can provide many benefits, including improved economics, faster time-to-market, more frequent and lower-cost upgrades, greater value-added and/or new offerings, and improved agility.

SaaS Doesn’t Eliminate Technical Debt
One quality even true SaaS shares with traditional on-premise software is technical debt. Another benefit of the SaaS model not listed above is the continuous review of the software by multiple users, which can clue the vendor in to issues with the code that impact performance.
There’s also a new generation of cloud-based portfolio risk analysis solutions that quantify the size and structural quality of applications, evaluate technical debt and offer information and insights that support investment decision-making. These solutions can provide continuous monitoring of the SaaS solution as well as risk and application analysis, so the vendor can implement a risk-based application portfolio management (APM) strategy faster, enabling better and safer decisions, portfolio modernization and transformation. Such a solution also profiles the structural quality risk, complexity and size of any application to identify unexpected risks. Finally, it quantifies technical debt to proactively prevent software cost and risk from spiraling out of control.
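To make the quantification step concrete, here is a minimal back-of-the-envelope sketch of the kind of arithmetic involved: count the structural violations an analysis finds, weight them by estimated remediation effort, and multiply by a labor rate. The violation categories, effort figures and hourly rate below are hypothetical illustrations, not numbers from any particular tool.

```python
# Hypothetical back-of-the-envelope technical debt estimate.
# Violation counts, remediation effort and hourly rate are made up
# for illustration; real tools derive these from code analysis.

HOURLY_RATE = 75  # assumed blended developer cost, in dollars

# severity -> (violations found, estimated hours to fix each one)
violations = {
    "critical": (40, 4.0),
    "high":     (150, 2.0),
    "medium":   (600, 0.5),
}

def technical_debt(violations, rate=HOURLY_RATE):
    """Return the estimated remediation cost in dollars."""
    hours = sum(count * effort for count, effort in violations.values())
    return hours * rate

if __name__ == "__main__":
    print(f"Estimated technical debt: ${technical_debt(violations):,.0f}")
```

With the made-up numbers above, 760 hours of remediation at $75 an hour works out to roughly $57,000 of debt for a single application; the point is the shape of the calculation, not the figures.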
If users think they are going to eliminate technical debt by moving to a SaaS model, their thinking is cloudy. But there are solutions to identify and help address technical debt for SaaS architectures that are just as robust as their on-premise counterparts.

IT Must Keep Up with Electronic Medical Records

Imagine this scenario: your spouse, child or loved one is critically ill and is transferred from hospital to hospital, in search of that “House-like” diagnosis that will bring a cure, or at least remission. Think about the physical pain, the mental anguish, the uncertainty.
Now, layer onto that getting pushback from each hospital on releasing medical records. One hospital says it will forward the records within 21 days, another says it will release the records, but at a cost of $1 per page, and a third simply stonewalls.
Twenty-one days, really!? One dollar per page!? Are you kidding!?
Maybe it’s because I work in the technology industry that every time I walk into a doctor’s office and see the archaic shelves of paper files, I cringe. Hopefully, the doctor’s medical training is more up to date than his or her filing system. It makes me think of the time I broke my nose and went to the emergency room. There were two entrances, “Walk In” and “Ambulance Only.” Under the “Walk In” sign, someone had scrawled, “Walk In, Crawl Out.” I almost turned around and went home.
As Lauren Drell notes in a recent post, there’s no ONE place to find anyone’s complete and comprehensive medical history. As a result, each time you go to a new doctor, you spend time filling out forms you’ve already completed at your previous doctor’s office, not to mention the forms often ask redundant questions. The new doctor may then take bloodwork or conduct other tests that your previous doctor already completed. Why? Because in our litigious society, doctors are terrified of malpractice suits (don’t get me started), but also because the process for sharing medical records is paper-based and slow.
What if the medical records system were digitized and all critical information lived in one place that any doctor with the correct security credentials could access? That place should, obviously, be the cloud.
Storing Your Records in the Cloud
Happily, the Obama Administration realizes this and has earmarked $20 billion under the American Recovery and Reinvestment Act (ARRA), to accelerate the transition to electronic healthcare records.
Such systems actually already exist, but they are inefficient and difficult to use, as well as incredibly expensive – approximately $50,000 per doctor each year. It’s actually easier and cheaper to use paper. But Lauren Drell highlights a newer company, Practice Fusion, which has already taken 120,000 healthcare professionals digital, uploading more than 22 million medical records to the cloud. These are now accessible to medical professionals with the appropriate security credentials anywhere there’s a digital connection.
Practice Fusion is aiming toward medical record nirvana: each person having a single medical record from birth to death, containing all information, stored in a HIPAA-compliant location.
Accurate, Accessible Records Save Lives
The company’s CEO, Ryan Howard, notes that nearly 200,000 people die each year from preventable medical errors.  Comprehensive medical records could dramatically reduce that number, translating to lower malpractice premiums.
Patients gain more control over their health, while medical professionals can rapidly access critical medical information whether in an office, helicopter, ambulance or operating room. Doctors are able to serve more patients with better-quality care based on a complete picture of each patient’s history.
Digitizing records also holds promise for public health professionals, enabling them to study trends in patients with specific diseases or disorders. Organizations such as the CDC can more effectively control disease outbreaks by identifying commonalities among those afflicted.
A Cloud-based Records Strategy Starts with Software Quality
Critical to an effective, cloud-based Electronic Medical Record (EMR) solution is high-quality software that facilitates the rapid discovery and transfer of critical records, files that are often enormous. High-quality software also helps protect those files from cyberattacks. Automated software analysis and measurement solutions help developers create EMR solutions more efficiently and cost-effectively, and also help monitor team performance.
Maybe the reason House is so cranky is he’s sick of dealing with paper records?

The Chaos Monkey

Sometime last year, Netflix began using Amazon Web Services (AWS) to run their immensely successful video streaming business. They moved their entire source of revenue to the cloud. They are now totally reliant on the performance of AWS.
How would you manage the business risk of such a move? Stop reading and write down your answer. Come on, humor me. Just outline it in bullet points.
OK, now read on (no cheating!).
Here’s what I would have done. Crossed my fingers and hoped for the best. Of course you monitor and you create the right remediation plans. But you wait for it to break before you do anything. And you keep hoping that nothing too bad will happen.
Obviously, I don’t think like the freakishly smart engineers at Netflix. In the Netflix Tech Blog, here is how they describe what they did (I first read about this in Jeff Atwood’s blog, Coding Horror).
“One of the first systems our [Netflix] engineers built in AWS is called the Chaos Monkey. The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.”
This is what proactive means! How many companies have the guts, backed by the technical prowess, to do this (not just to build the Chaos Monkey, but to deal with the destruction it leaves in its wake)?
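The core idea fits in a handful of lines. The sketch below is a toy stand-in, not Netflix’s code: the instance list and the `terminate` callback are placeholders for whatever your cloud provider’s API actually exposes, and the probability is an arbitrary illustration value.

```python
import random

def chaos_monkey(instances, terminate, probability=0.2, rounds=24):
    """Each round, maybe pick one random instance and kill it.

    `instances` and `terminate` are placeholders: in real use they
    would come from your cloud provider's API. `probability` and
    `rounds` are arbitrary illustration values.
    """
    for _ in range(rounds):
        if instances and random.random() < probability:
            victim = random.choice(instances)
            print(f"Chaos monkey strikes: terminating {victim}")
            terminate(victim)

# Dry run against fake instance IDs; the terminate callback does nothing.
if __name__ == "__main__":
    chaos_monkey(["i-0a1b", "i-0c2d", "i-0e3f"], terminate=lambda i: None)
```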
It dawned on me that IT systems are constantly bombarded by countless chaos monkeys, or one chaos monkey if you prefer, controlling hundreds of variables. The best way to get ahead is to simulate the type of destruction these chaos monkeys might cause so you can be resilient rather than reactive to monkey strikes.
And strike the monkeys will, especially when software is changing rapidly (Agile development, major enhancements, etc.). In these conditions, the structural quality of software can degrade over time, iteration by iteration and release by release.
So I built a chaos monkey to simulate this deterioration of structural quality as an application is going through change. Here’s how it works.
An application starts with the highest structural quality – a 4.0 (zero is the minimum quality score). At the end of each iteration/release/enhancement one of three things might happen to this value:

It might, with a certain probability, increase [denoted by Prob(better quality)]
It might, with a certain probability, decrease [Prob(worse quality)]
It might, with a certain probability, stay the same [Prob(same quality)]

Of course we don’t know what each of these probabilities should be – once you have structural quality data at the end of each iteration for a few projects, you will be able to estimate them. Nor do we know by how much the structural quality will increase or decrease at each iteration, so we can try out a few values for this “step” increase or decrease.
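Here is a minimal sketch of that simulation, with the 0-to-4.0 scale from above baked in. The probabilities and the step size are placeholder guesses, to be replaced by estimates from your own iteration-end measurements.

```python
import random

def simulate_quality(iterations=24, start=4.0,
                     p_better=0.2, p_worse=0.5, step=0.1):
    """Random-walk the structural quality score across iterations.

    Each iteration the score rises by `step` with probability p_better,
    falls by `step` with probability p_worse, and otherwise stays the
    same. Scores are clamped to the 0-4.0 scale. All parameter values
    here are guesses, to be calibrated against real project data.
    """
    score = start
    history = [score]
    for _ in range(iterations):
        roll = random.random()
        if roll < p_better:
            score += step
        elif roll < p_better + p_worse:
            score -= step
        score = max(0.0, min(4.0, score))
        history.append(score)
    return history

if __name__ == "__main__":
    for i, s in enumerate(simulate_quality()):
        print(f"iteration {i:2d}: quality {s:.2f}")
```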
After 24 iterations, here is where the chaos monkey has left us:

[Figure: The Structural Quality Chaos Monkey]
Because structural quality lies beneath the application’s visible behavior, it can be difficult to detect and monitor in the rush and tumble of development (or rapid enhancement). Even when structural quality drifts downward in small steps, those steps can quickly accumulate and drive down an application’s reliability. It’s not that you or your teammates lack the knowledge; you simply don’t have the time to ferret out this information. Business applications contain thousands of classes, modules, batch programs, and database objects that need to work flawlessly together in the production environment. Automated structural quality measurement is the only feasible way to get a handle on structural quality drift.
Once you know what to expect from the chaos monkey, you can build in the safeguards you need to prevent decline rather than be caught by surprise; one simple example is sketched below.
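As one concrete (and hypothetical) example of building those safeguards in, a simple quality gate in the release pipeline can refuse a build whose measured score has drifted too far or fallen too low. The thresholds below are arbitrary; tune them to whatever your own measurement baseline supports.

```python
def quality_gate(previous_score, current_score, max_drop=0.2, floor=3.0):
    """Decide whether a release passes the structural quality gate.

    `max_drop` (largest acceptable one-release decline) and `floor`
    (minimum acceptable score) are illustrative thresholds, not values
    from any particular tool.
    """
    if current_score < floor:
        return False, f"score {current_score:.2f} is below the floor of {floor}"
    drop = previous_score - current_score
    if drop > max_drop:
        return False, f"score fell {drop:.2f} in a single release"
    return True, "ok"

# Example: a release that slips from 3.6 to 3.3 exceeds the allowed drop.
passed, reason = quality_gate(3.6, 3.3)
print(passed, reason)
```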
Long live the chaos monkey!