QAI QUEST: Fixing Quality Issues with Automated Code Review

John Chang, Head of Solution Design, CAST Software, at QAI QUEST 2016
Recently I had the pleasure of speaking at QAI QUEST 2016, which showcases the latest techniques for software quality measurement and testing. It was a content-rich program, with more than three days of deep dives into topics like DevOps, Open Source, Security, Mobile and more. But what struck me most, above all the event chatter, is that even the brightest companies still have a difficult time identifying and fixing code quality errors.
During my keynote, I spoke about the perils of system-level defects and how these defects, when they go undetected, can completely ruin ingenious application development strategies. There are two key reasons these bugs persist: decentralized development practices and a lack of automated code review standards.
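To make “a lack of automated code review standards” concrete, here is a minimal sketch of a quality gate a team could run in a CI pipeline. Everything in it is an assumption for illustration (the src/ layout, the 50-line threshold, the rule itself); the point is that even one simple, machine-enforced rule catches drift that decentralized teams tend to miss:

    # Minimal code-review gate (hypothetical rule and thresholds).
    # Flags overly long functions in Python sources and exits nonzero
    # so a CI pipeline can block the merge.
    import ast
    import pathlib
    import sys

    MAX_FUNCTION_LINES = 50  # assumed team standard, not a CAST metric

    def long_functions(path: pathlib.Path):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_FUNCTION_LINES:
                    yield f"{path}:{node.lineno} {node.name} ({length} lines)"

    def main() -> int:
        violations = [v for p in pathlib.Path("src").rglob("*.py")
                      for v in long_functions(p)]
        for v in violations:
            print("quality gate:", v)
        return 1 if violations else 0

    if __name__ == "__main__":
        sys.exit(main())

Because the gate exits nonzero on a violation, the standard is enforced by the build rather than by reviewer vigilance, which is exactly what decentralized teams are missing.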

Using Code Quality Metrics to Improve Application Performance

For years, refactoring has been a common way to improve the quality, efficiency, and maintainability of an application. However, a recent article by ITworld discusses how CIOs may not be getting a valuable return on the time and effort they invest in refactoring. While many believe refactoring reduces the risk of future headaches, new findings from a study by Sri Lankan researchers suggest that refactoring does not significantly improve code quality.
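For readers unfamiliar with what such a study measures, here is a hypothetical before-and-after refactoring; the invoice example is mine, not taken from the research. The behavior is identical, and the open question the researchers raise is whether structural changes like this measurably move code quality metrics:

    # Before: discount logic duplicated inside one long loop.
    def invoice_total_before(items, customer_type):
        total = 0.0
        for price, qty in items:
            subtotal = price * qty
            if customer_type == "gold":
                subtotal *= 0.9
            elif customer_type == "silver":
                subtotal *= 0.95
            total += subtotal
        return total

    # After: the discount rule is extracted into a named, testable unit.
    DISCOUNTS = {"gold": 0.9, "silver": 0.95}

    def discounted(subtotal, customer_type):
        return subtotal * DISCOUNTS.get(customer_type, 1.0)

    def invoice_total_after(items, customer_type):
        return sum(discounted(price * qty, customer_type)
                   for price, qty in items)

Both functions return the same totals; only maintainability changes, which is precisely the property that is hard to capture in a quality score.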

CAST User Group on Function Point Analysis: Key Findings

On April 6th, CAST held a user group meeting on the topic of function point analysis and software productivity measurement. The meeting gathered more than 20 software measurement professionals from major companies in the banking, IT consulting, telecom, aviation and public sectors for a two-hour working session to discuss the benefits of function point analysis.
The event featured presentations including:

An IBM case study on working with CAST to integrate and secure an Automated Function Point (AFP) approach with a major player in the aeronautics sector within TMA Systems
A functional sizing case study (see the sizing sketch after this list)
Updates on the new CISQ standards for Automated Function Points
The importance of internal and external benchmarking
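For context on what these sessions size and benchmark, below is a minimal sketch of unadjusted function point counting using the standard IFPUG complexity weights. The application inventory in the example is invented for illustration:

    # Unadjusted function point (UFP) counting with IFPUG weights.
    IFPUG_WEIGHTS = {
        "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs
        "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
        "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
        "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
        "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
    }

    def unadjusted_fp(counts):
        """counts: {component: {complexity: number_of_occurrences}}"""
        return sum(IFPUG_WEIGHTS[comp][cx] * n
                   for comp, by_cx in counts.items()
                   for cx, n in by_cx.items())

    # Hypothetical application inventory:
    example = {"EI": {"low": 10, "avg": 5}, "EO": {"avg": 7},
               "ILF": {"avg": 3}, "EIF": {"low": 2}}
    print(unadjusted_fp(example))  # 10*3 + 5*4 + 7*5 + 3*10 + 2*5 = 125

Automation, as in CAST's AFP approach, replaces the manual identification of these components with static analysis of the code itself; the arithmetic above stays the same.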

CISQ & IT Risk Management: Minimizing Risk in Government IT Acquisition

On March 15, CISQ hosted the Cyber Resilience Summit in Washington, D.C., bringing together nearly 200 IT innovators, standards experts, U.S. Federal Government leaders and attendees from private industry. The CISQ quality measures have been instrumental in guiding software development and IT organization leaders concerned with the overall security, IT risk management and performance of their technology. It was invigorating to be amongst like-minded professionals who see the value in standardizing performance measurement.

Software Risk Management: Risk Governance in the Digital Transformation

Software Risk Management in Digital Transformation was the focus of the 4th edition of the Information Technology Forum, hosted by the International Institute of Research (IIR). Massimo Crubellati, CAST Italy Country Manager, discussed how Digital Transformation processes are changing the ICT scenario and why software risk management and prevention are mandatory.
 
Massimo shared our recipe for Digital Governance evolution: include a specific ICT Risk chapter in the design of the governance structure of the digital transformation. Its most important element is determining the methods and key performance indicators used to measure the operational risk inherent in the application portfolio. Measurement must be continuous and structural, and it must include assessing the inherent weaknesses of application assets by analyzing the correlations between the layers that compose them. The result is not only effective prevention of direct damage, which ensures service resilience, but also a reduction in maintenance and application management costs.
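As a rough illustration of the kind of KPI Massimo describes (all categories, weights, and figures below are my assumptions, not CAST's methodology), a portfolio-level operational risk indicator might aggregate weighted structural violations per application:

    # Hypothetical portfolio risk KPI: weighted violation density per app,
    # scaled by business criticality, aggregated across the portfolio.
    from dataclasses import dataclass

    WEIGHTS = {"robustness": 3.0, "security": 4.0, "performance": 2.0}

    @dataclass
    class App:
        name: str
        kloc: float                  # size in thousands of lines of code
        violations: dict             # category -> critical violation count
        business_criticality: float  # 1 (low) .. 5 (high)

    def app_risk(app: App) -> float:
        weighted = sum(WEIGHTS[c] * n for c, n in app.violations.items())
        return (weighted / app.kloc) * app.business_criticality

    def portfolio_kpi(apps: list[App]) -> float:
        # Size-weighted average, so large risky systems dominate the KPI.
        total_kloc = sum(a.kloc for a in apps)
        return sum(app_risk(a) * a.kloc for a in apps) / total_kloc

    apps = [App("payments", 420, {"security": 38, "robustness": 51}, 5),
            App("crm", 150, {"performance": 12, "robustness": 9}, 2)]
    print(round(portfolio_kpi(apps), 2))  # tracked release over release

Measured continuously, release over release, a trend in this kind of indicator is what turns risk governance from an audit exercise into an operational practice.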

A Code Quality Problem in Washington State Puts Dangerous Criminals Back on the Street

We always hear about issues with systems, applications, or services caused by poor code quality or missed defects, but what happens when these problems become life threatening? A recent NPR article discussed the early release of dangerous prisoners, some of whom have since been charged with murder. According to the article, Governor Jay Inslee of Washington State reported that more than 3,200 prisoners were released early due to a software defect.
These early releases were not the result of good behavior, but of a software glitch within the Department of Corrections. As reported by the governor’s general counsel, Nick Brown, approximately 3% of the releases since 2002 should not have been allowed. The glitch went unnoticed for more than 10 years, and as a result dangerous criminals made their way back into society.

The HSBC Failure Has Many Wondering: Are Banking Providers Taking the Appropriate Measures to Ensure Code Quality and System Dependability?

The banking industry has definitely had its share of ups and downs when it comes to service reliability. In the past year, there have been a number of instances where customers have been unable to access funds, receive deposits, and pay bills. As reported in an article by The Guardian, HSBC experienced a system failure at the end of August that left thousands of its customers in a bind over a major bank holiday.
 
This “technology glitch”, as HSBC described it, prevented customers from being paid their salaries: the system failure made it impossible for employers to access their business banking accounts. A staggering number of banks have experienced system failures and service issues like this one, which raises a question: is poor code quality becoming a big problem for the banking industry?