CAST Software: On Quality Blog - Because Good Software Is Good Business

CISQ Is Helping CIOs Master Digital Transformation - Tue, 07 Mar 2017

Mr. Bentz recently published an article detailing the imperative for CIOs to become digital leaders. Research from Gartner confirms that high-performing CIOs lead because of their participation in a digital ecosystem. To drive transformational programs effectively, CIOs must have a keen understanding of how digital drives both business and IT success. We are proud to sponsor CISQ's Software Risk and Innovation Summit on April 27th in New York City, where digital leaders will discuss the future of digital transformation and DevOps. We hope to see you there! You can register here. In the meantime, have a look at what Mr. Bentz has to say about digital leadership. An excerpt of his article is below. See you in April!

=== Design is more important than ever. Integration is more important than ever. Architecture is more important than ever. The standards CISQ supports are more important than ever to drive transformation while improving IT application quality and reducing cost and risk. Now, taking all this and making it easy for the business to consume and agile for IT to deliver? This is where digital leaders are born. At April's Software Risk and Innovation Summit, we will be talking about exactly this: how CIOs can maximize their digital knowledge and available tools to drive change through IT all the way to the business.
Specifically, we are hosting two panels: "DevOps and Digital Transformation," on how to accelerate DevOps transformation in fast-paced and highly regulated environments, and "The Future of Innovation in IT," on how trends in technology, business and policy are influencing IT strategy, thinking and skills. The lesson is: don't delay your growth as a digital leader. With more visibility and opportunity comes more responsibility. Gartner's 2016 CIO survey showed that IT spending in support of digitalization will increase by 10 percent by 2018, and that top-performing businesses where digitalization is already "fully baked" into planning processes and business models will spend 44 percent of their IT budget on digital by 2018.

Are Digital Strategies in Banking Working? - Thu, 22 Dec 2016

CAST recently joined a TechMarketView round table in London discussing the effectiveness of digital strategies in banking. It's no surprise that banks face significant headwinds heading into 2017, including geo-political uncertainties, increased regulation, the need to modernize legacy systems and growing cyber threats. Digital is no longer "just another channel": it's essential to success and to securing optimal position with the next generation of banking customers. To capitalize on opportunities, bank management must establish solid KPIs to create and sustain the right behaviors in a digital environment.

Challenges to Digital Success

Today, software risk is one of the major sources of profit loss and security exposure for banks. Despite tightening regulations, there are still major technological challenges that legislators have failed to address. Technology risk is less well understood by regulatory entities, and system-level risks are most poorly understood at the software layer. As banks continue to build or re-engineer their software assets to meet market demand, they will also introduce new risks into their organizations.
Within the banking sector, competition continues to intensify between incumbent major banks, challenger banks and mono-line specialists. The aim of challenger banks is to ensure that the use of alternative banking methods and multiple banking relationships becomes accepted across the wider market. These banks have an advantage in that they do not carry the burden of a universal mandate; they can therefore simplify their operations by declining applicants who would be complex to manage. Banks as a group also find themselves under threat from non-bank digital specialists. Here there remains considerable debate as to whether consumers consider "being a bank" necessary to be entrusted with the bulk of a person's transactions. For example, does Amazon have a trusted enough brand to play the key banking role? Digital pressures are also mounting for corporate and investment banks. In these sub-sectors, client expectations are high, posing challenges for bank CIOs and IT managers to modernize and upgrade legacy systems. Digital transformation is critical for this group. No matter the market specialty, operational and technology risk remain chief priorities for banking executives.

Cloud Readiness & Legacy Modernization

Incumbent banks know that they need to leverage cloud technology to compete with cloud natives on pace of change and innovation. For many, considerable work is required to understand the implications of this method of delivery and to enable the building of a satisfactory regulatory framework. More significant steps are required in the transformation of legacy systems, but the move to cloud is a necessary enabler in terms of meeting cost targets and facilitating improvements in user experience.
To enable a smoother transition, CAST has created a Cloud Assessment that uses industry-proven metrics to assess key characteristics like structural robustness and efficiency, security vulnerability, architectural compliance and transformation potential. The challenge for established banks is made much greater by their complex legacy back-ends, which need to deliver on the new demands of digital initiatives. This is particularly true in cloud migration. Simplification of the application "spaghetti" built up over a considerable period of time needs to be done with minimum risk to the ongoing operations of the bank and as cost-effectively as possible. This crucial process requires a deep understanding of the interactions between applications and of the major potential "pain points" in the transformation programme. CAST tackles this complexity with System-Level Analysis, which provides a detailed report on the "health" status of applications in order to build transformation and development strategies. To learn more about CAST, visit our website. You can also read the full version of TechMarketView's research report, Are the Digital Strategies of the Banks Working?, here.

Why Productivity Measurement Matters - Tue, 13 Dec 2016

CAST recently participated in an event, Productivity Measurement in the Context of IT Transformation, featuring representatives from the retail, banking and insurance industries in the Netherlands. Featured speakers included CISQ, Allianz, BNP Paribas and METRI. Productivity measurement is particularly useful for individuals who lead enterprise governance and measurement programs, as well as for practitioners working on business-critical software. Pragmatic approaches to automated software measurement are more important than ever, especially as the shift to digital continues.
As CAST's Head of Product Development recently wrote in Computer Weekly, the main struggle for IT-intensive organizations as they push toward transformation is the layers of software complexity amassed over years of doing business. Even as new methodologies like microservices, DevOps and Agile are adopted to streamline development, instituting automated, standards-based productivity measures is key to success. Software productivity metrics are essential to ensure development teams deliver the best value in the shortest amount of time, while helping their organizations determine the amount of input required to complete a software project. Standards-based measures, like those supported by CISQ, are essential to define a performance baseline as well as the productivity goals the organization wants to achieve. An important first step in software productivity measurement is to establish a performance baseline and determine improvement goals. Once goals are set, the next step is to determine how teams will deliver functionality using normalized measures. Automated Function Points are a good unit of measure because they are objective, repeatable and can be counted on any application, whether new or existing. Another important step is to define at which moment measurement should be performed, depending on the development methods used. This is where it becomes particularly important to use normalized measures in determining the functionality delivered by dev teams. Normalized measures apply regardless of the number of lines of code or the type of development work completed. As a result, organizations can accurately quantify how much output is provided based on the number of completed business functions.
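As a toy illustration of the normalized-measure idea, productivity can be tracked as function points delivered per person-month and compared against a baseline. The function names and numbers below are ours, purely illustrative, not CAST's methodology:

```python
def productivity(function_points, person_months):
    """Normalized output: function points delivered per person-month."""
    if person_months <= 0:
        raise ValueError("effort must be positive")
    return function_points / person_months

def improvement(current, baseline):
    """Relative change against the baseline, e.g. 0.25 means 25% more output."""
    return (current - baseline) / baseline

# Hypothetical baseline release: 300 function points with 25 person-months.
baseline = productivity(300, 25)       # 12.0 FP per person-month
# Hypothetical next release: 360 function points with 24 person-months.
current = productivity(360, 24)        # 15.0 FP per person-month
gain = improvement(current, baseline)  # 0.25
```

Because the unit of measure is the same regardless of language or code volume, the comparison holds across teams and releases, which is exactly what normalization buys you.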
Software productivity measurement is a critical step in the project management lifecycle because it helps organizations to:
- Identify current productivity levels
- Assess delivered quality
- Rationalize the need for increased investments
- Streamline programming operations
- Pinpoint development bottlenecks
- Recognize underutilized resources
- Evaluate vendor-supplied value

To optimize software productivity measurement, it should also account for the organization's development processes and environment: the number of programming languages, the complexity of the application and the type of applications developed. Whether you want to measure the productivity of your development teams, measure the structural quality of your business-critical systems, or prevent software risk from leading to major outages or data corruption, a software analysis and measurement system is no longer optional. For more information about the event and its outcomes, contact us.

Following Best Practices to Achieve Application Security & Reduce Risk - Tue, 06 Dec 2016

Secure application design starts with architectural discipline, enforced through software analysis (for example, with CAST's Application Intelligence Platform). Over time, your team will be able to establish secure architecture components that should handle all sensitive data. This is a foundational approach to data security and is essential to full security coverage, as well as to measuring the security hygiene of development teams and vendors. Instead of hoping that security can be layered on top of weak applications, your development team should be able to demonstrate that applications can be made more secure. Enforcing architectural guidelines can also provide a standard of legitimacy for managing vendors. Secure application design needs to go far beyond a check-the-box approach of merely conforming to minimal regulatory standards; most organizations don't even go far enough in complying with those.
While many software quality metrics can be measured against an application, CAST uses a Security/Quality Scorecard based on OMG and CISQ measures for Security, Reliability, Maintainability and Security Debt, with thresholds for each measure suggested by observations in the field. With the rapid evolution of standards, regulations and system complexity, application security needs a holistic approach. Coding or architectural issues that lead to security vulnerabilities can be among the most expensive to correct late in the lifecycle. To create a culture of increased security, all parts of the development and stakeholder organizations must be engaged, from requirements through operational maintenance. Application portfolios frequently evolve with little attention to inter-application or inter-module discipline, which is where the most critical security flaws occur. The key is to refine the metrics that matter and deliver a balanced scorecard reflecting your commitment to secure, low-risk applications. Many of the best-run companies use a similar set of analytics to reduce software risk while also reducing costs. The payback is almost immediate. According to IBM's data breach study, the average cost of a data breach in 2016 was $4 million. A recent IDC study pegs the average cost of application failure at between $500,000 and $1 million per hour. Not knowing where the weaknesses are is not a valid excuse or a successful defense. CAST has put together the tools you need to manage this threat successfully. Together with CISQ, MITRE and leading industry groups, we can help you embed security into your applications. Put us to the test and let us show you how this investment can keep you off the pages of the Wall Street Journal. This is the season for introspection, new resolutions and finding ways to improve your status quo. Get started with a structural health assessment of your mission-critical apps within 48 hours.
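The scorecard idea described above boils down to comparing per-factor scores against thresholds. The sketch below is illustrative only: the health factors and threshold values are our own placeholders, not CAST's published scorecard:

```python
# Hypothetical thresholds on a 1.0-4.0 scale (higher is better);
# not CAST's actual published values.
THRESHOLDS = {"security": 3.0, "reliability": 3.0, "maintainability": 2.5}

def scorecard(scores):
    """Rate each health factor pass/fail against its threshold.

    Missing factors default to 0.0, so an unmeasured factor fails.
    """
    return {factor: ("pass" if scores.get(factor, 0.0) >= minimum else "fail")
            for factor, minimum in THRESHOLDS.items()}

result = scorecard({"security": 3.4, "reliability": 2.8, "maintainability": 3.1})
# result: security passes, reliability fails, maintainability passes
```

In practice an organization would calibrate thresholds from a baseline measurement of its own portfolio rather than pick them a priori.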
Technical Debt Indexes Provided by Tools: A Preliminary Discussion - Tue, 29 Nov 2016

We have been studying Technical Debt (TD) indexing as a method to evaluate development projects. We recently presented a paper at MTD 2016, the International Workshop on Managing Technical Debt organized by the Software Engineering Institute at Carnegie Mellon, where we discussed how five different, widely known tools compute Technical Debt Indexes (TDI), that is, numbers synthesizing the overall quality and/or TD of an analyzed project. In our analysis, we concentrated on the aspects missing from TDIs and examined whether (and how) the indexes consider architectural problems that could have a major impact on architectural debt. The focus on architectural debt is extremely important, since architectural issues have the largest (and most subtle) impact on maintenance costs. When problems appear at the architectural level, they tend to be widespread throughout the project, making them harder to spot and making evolution more difficult, tedious, error-prone and ultimately slow. While architecture conformance checking technology is available (and suggested whenever possible), in recent years knowledge about generally bad architectural practices has been consolidating under the term "architectural smells". These smells, named by analogy to the more famous "code smells", point to a suspect architectural construct and signal trouble in project evolution. Architectural smells are usually detected through the analysis of dependencies. The most famous example, known for many years, is the Cyclic Dependency: a cycle makes the separation of the components involved impossible, so if the components of a cycle appear in different modules, the modules are not actually separable. Detection of cyclic dependencies is supported, for example, by CAST's System-Level Analysis and other solutions.
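At its core, cyclic dependency detection is a graph problem: represent modules as nodes, dependencies as edges, and look for a back edge during a depth-first traversal. The following minimal sketch (our own illustration, not any tool's actual implementation) shows the idea:

```python
def find_cycle(deps):
    """Detect a cyclic dependency in a module graph via depth-first search.

    deps maps each module to the modules it depends on.
    Returns one cycle as a list of modules, or None if the graph is acyclic.
    """
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in deps.get(node, ()):
            if dep in visiting:                     # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if dep not in done:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for module in deps:
        if module not in done:
            cycle = dfs(module, [])
            if cycle:
                return cycle
    return None

# Hypothetical portfolio: billing -> orders -> customers -> billing is a cycle,
# so these three modules cannot be deployed or maintained independently.
deps = {"billing": ["orders"], "orders": ["customers"], "customers": ["billing"]}
```

Real system-level analysis works on dependency graphs extracted from code rather than hand-written dictionaries, but the detection principle is the same.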
Other architectural smells have been defined and are under study, combining dependency analysis with history and evolution analysis to expose hidden dependencies, or with pattern analysis to identify architecturally relevant elements (e.g., classes) in modules and analyze the impact of their issues on the architecture. In our research, we have already started to gather architectural smell definitions and to implement their detection in a small prototype tool called Arcan, which currently detects three architectural smells. We are now enhancing the implemented techniques with filters and historical information, as explained in our related ICSME 2016 paper, "Automatic Detection of Instability Architectural Smells."

Francesca Arcelli Fontana received her Diploma and PhD degrees in Computer Science from the University of Milano. She worked at the University of Salerno and the University of Sannio, Faculty of Engineering, as an assistant professor in software engineering. She is currently an Associate Professor in the Department of Computer Science of the University of Milano-Bicocca. Her research interests include software evolution, software quality assessment, software maintenance and reverse engineering. She is a member of the IEEE and the IEEE Computer Society. Marco Zanoni received his MS and PhD degrees in computer science from the University of Milano-Bicocca. He is a post-doc research fellow in the Department of Informatics, Systems and Communication of the University of Milano-Bicocca. His research interests include software quality assessment, software architecture reconstruction and machine learning. Riccardo Roveda is a PhD student in Computer Science at the University of Milano-Bicocca (Italy). He received his bachelor's degree in 2011 and his master's degree in 2014. He works in the ESSeRE lab at the Department of Informatics, Systems and Communication (DISCo) of the University of Milano-Bicocca.
He has conducted studies on architectural erosion, technical debt and the correlation between architectural smells and code smells. He is an open source enthusiast.

Legacy Modernization is About Application Security, Not Just Cost - Thu, 17 Nov 2016

From Yahoo's apparent cover-up of a massive security breach that is damaging its merger with Verizon, to the even more recent bank hack in India in which millions of debit cards were compromised, it's apparent that there are holes in our current defense systems. Adding to the complexity, eWeek has reported that DDoS attacks hit record highs in Q3 2016. For most data-intensive organizations, a leak of mission-critical or customer information would spell disaster. What's more, security gaps are known to go undetected far longer in enterprises with a high percentage of legacy systems. Many organizations are in the midst of digital transformations or cloud migrations to improve operational efficiency and cut costs. A happy side effect of these modernization efforts is the opportunity to take a hard look at application security. Keeping your organization off the front page of the Wall Street Journal requires a development culture committed to reducing security and quality risks in its mission-critical applications. Despite the visibility of security risk, a surprising number of application developers do not understand how and where vulnerabilities are introduced into code. Thought leadership groups like the Consortium for IT Software Quality (CISQ), MITRE and the Software Engineering Institute (SEI) publish best practices for secure coding, which CAST has embedded into its core products. The Common Weakness Enumeration (CWE) is a list of security-related software weaknesses managed by MITRE, the most important of which also feed standards such as OWASP, PCI DSS and CISQ.
Many of these weaknesses, such as improper use of programming language constructs, buffer overflows and failures to validate input values, can be attributed to poor-quality coding and development practices. Improving quality is a necessary condition for addressing software security, as 70% of the issues listed by MITRE as security weaknesses are also quality defects. Quality is not an esoteric value; it is a measure of risk in the software. Many organizations put QA gates in the software development cycle; however, manual inspection finds only about 20% of the defects within individual modules. While this typical 80% miss rate for manual inspection is bad enough, 50% of security breaches occur at module integration points, and manual inspection misses most of the inter-procedural problems that arise when system components are integrated. Workstation code checkers find some of the intra-module defects, but almost none of the inter-module defects that account for 80% of security breaches. With only 20% of code defects uncovered, most security flaws sail through undetected. Unit-level automated code scanners do not detect inter-module flaws either. In practice, 92% of code review findings are unit-level and not dangerous; only about 8% of findings are inter-module, system-level flaws. These are the issues that need to be addressed during IT modernization projects, prior to fielding the software release, because they are the most serious. For example, the Heartbleed vulnerability defied discovery for four years because the flaw was in module interactions. As systems become more complex, the interactions between system components introduce numerous points of failure. Both static and dynamic analysis must be part of a quality and security assurance strategy, since detecting many non-functional, structural defects is extremely difficult with traditional testing.
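To see why unit-level scanning misses inter-module flaws, consider this deliberately tiny, hypothetical sketch (our own example, not drawn from any real codebase): each "module" is individually correct, and the defect only exists at the point where they are wired together.

```python
# "Module A": storage layer. Assumes its input is already sanitized,
# so a unit-level review of this function finds nothing wrong.
def store_comment(db, text):
    db.append(text)

# "Module B": sanitizer. Also passes unit-level review on its own.
def sanitize(text):
    return text.replace("<", "&lt;").replace(">", "&gt;")

# Integration point: the handler wires A and B together, but one code
# path skips sanitize() -- an inter-module flaw that only system-level
# analysis of the call paths into store_comment() would surface.
def handle_request(db, text, trusted=False):
    if trusted:
        store_comment(db, text)          # unsanitized data reaches storage
    else:
        store_comment(db, sanitize(text))
```

Scanning `store_comment` or `sanitize` in isolation reports nothing, because the defect is a property of the data flow between them, not of either unit.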
Functional testing consumes so much time that little remains for security. Hack testing finds only one way to breach the system; automated inspection examines all the components for exposure points. CAST helps IT-intensive enterprises through IT modernization projects by illuminating a full-picture view of both application- and portfolio-level health. With industry-leading security and risk standards embedded in the Application Intelligence Platform, measuring the Reliability, Security, Performance Efficiency and Maintainability of your software source code is a breeze. Get started with a structural health assessment of your mission-critical apps within 48 hours.

The Insurance Industry Challenge: Improve Software Risk Management - Tue, 15 Nov 2016

Insurers must improve software risk management to deliver the value today's market is after. As recently reported by CAST Research Labs, insurance organizations face four main challenges to their IT transformation:
- Insurers still maintain large numbers of COBOL applications.
- North American insurers, in particular, have bigger, more complex applications to maintain.
- Outsourcing applications, which can be attractive from a budgeting perspective, tends to involve lower-quality applications.
- A high percentage of insurance applications still rely on legacy technology developed and written by people who have since retired from the workforce.

Summarized below, the 2016 CRASH Report on Insurance evaluated global trends in the structural quality of business applications in the insurance industry. CAST Research Labs measures application structural quality against five primary health factors: Security, Performance Efficiency, Robustness, Transferability and Changeability. The CRASH Report on Insurance was developed with contributions from our partner, CGI, and the findings below also reflect the insights of CGI's insurance industry experts.
Insurers Maintain Large Numbers of COBOL Applications

Insurance companies continue to rely on core insurance applications originally built in the 1970s and 1980s. These systems are never really retired; they are "run off". Because insurance policies (contracts) on the life insurance side of the industry can run for more than 20 years, the systems are preserved to process older policies, hence the high number of COBOL applications. Additionally, the data stored in a system often changes in definition from release to release, and the older system is still required to interpret that data. As insurers modernize and re-platform applications for greater agility, it is useful to assess software risk as these re-worked applications move to production. Dealing with legacy code in older systems that have been updated in piecemeal fashion can present security and privacy risks, not to mention an increased possibility of system outage.

Overall Quality and Security Is Lacking with North American Insurers

Insurers, particularly those in the US and North America, are among the least secure when compared to other businesses in the financial services sector. For example, our CRASH Report found 6.8 million violations of code unit-level quality rules in the insurance sample, of which 28% were rated "high severity." Arguably, the security problem is worsened by the US state-based regulatory system, which subjects insurers to 50 sets of differing rules that must be addressed within the legacy systems, often with deadlines that arrive far too swiftly for software engineers to update the technology in a secure way. As my colleague Dr. Bill Curtis stated in a recent press article, "these systems are huge and not well-documented, so it becomes harder to make changes to them.
With required regulatory changes, it becomes harder and takes longer to create solutions than it does in the EU, where there is more consistency in regulations."

Outsourced Applications Tend to Be of Lower Quality

Given the length of time most insurance applications have been running in-house, the code base has grown much larger and is frequently customized with 'bolt-ons' to address new functionality and regulatory change. There has been minimal investment to optimize or streamline this code before it is outsourced. As IT departments look for cost efficiencies, the size of the code base can become a challenge and precipitate the move to outsourcing. Mainframe costs are significant, and more MIPS means significantly higher costs for hardware and operating systems. But as applications are outsourced, the need for transparency and measurement does not dissipate. It is important for client and vendor to set goals and priorities against the nonfunctional requirements that can drive business risk and long-term cost. The best practice is to establish metrics against corporate or industry quality and software risk standards, like those published by CISQ.

Legacy Technology and Attrition Challenges

Insurance carriers are aware that their legacy environments are more costly to operate and maintain than modern systems. However, much of the data stored in these systems is difficult to analyze and migrate to newer environments due to the attrition of staff with knowledge of the legacy systems and of the data structures across many releases of the software. Migration is costly; as an alternative, carriers opt to buy time and run off rather than retire the systems. System-level analysis and software measurement can play an important role in helping insurers understand the true cost of these legacy systems as well as the benefits of transitioning to modern IT environments. To download the full 2016 CRASH Report on Insurance, click here.
See Through the Technology Framework Cobweb to Rationalize IT Projects - Fri, 04 Nov 2016

Framework sprawl is best addressed at the Project Portfolio Management level. Because microservices is still a relatively new concept, applications developed in this manner tend to be siloed from the rest of the organization, which can lead to inconsistency in the frameworks used within a single IT shop. As IT organizations look to become more agile and interoperable, it is important to ensure that:
- Applications which share the same technical needs (for example, data persistence management) also rely on the same set of frameworks. Why do three related applications need to use Spring MVC, OpenXava and Apache Struts to perform the same task? Multiplying frameworks intended to do the same job not only increases maintenance cost at the application portfolio level but also affects your tech talent acquisition strategy.
- Version discrepancy between instances of the same framework in different applications is kept to a minimum. This is key to anticipating and rationalizing migration efforts and to guaranteeing that all applications receive the latest bug and security fixes.

To combat framework discrepancies, some IT shops adopt offline approaches to managing this potentially huge list of languages and frameworks. Manual lists, however, present several challenges: they are non-exhaustive and painful to keep up-to-date, and this non-automated process creates big problems for DevOps teams, who require accurate information to communicate on and optimize the infrastructure and technical software roadmap.

Overcoming Framework Challenges with CAST Highlight

CAST Highlight presents a unique value-add for IT project managers and software developers who want to gain full transparency and control over their frameworks. Highlight's quick portfolio-level IT health check inventories applications to illuminate each application's structure, along with an overview of application risk, complexity and cost.
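Conceptually, an automated framework inventory comes down to matching source files against known framework signatures and aggregating the hits per portfolio. The sketch below is a much-simplified illustration of that idea; the signature patterns are our own placeholders, and a real tool like Highlight ships far more sophisticated detection:

```python
import re

# Hypothetical signatures -- a real inventory tool ships hundreds of these.
SIGNATURES = {
    "Hibernate": re.compile(r"\borg\.hibernate\."),
    "AngularJS": re.compile(r"\bangular\.module\s*\("),
    "Symfony":   re.compile(r"use\s+Symfony\\"),
}

def detect_frameworks(source_files):
    """Map framework name -> sorted list of files that reference it.

    source_files maps a file path to that file's source text.
    """
    found = {}
    for path, text in source_files.items():
        for name, pattern in SIGNATURES.items():
            if pattern.search(text):
                found.setdefault(name, []).append(path)
    return {name: sorted(paths) for name, paths in found.items()}
```

Running this over a portfolio immediately surfaces the redundancies discussed above, such as two applications pulling in different persistence frameworks for the same job.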
This is a huge benefit, especially for organizations grappling with cloud migration and digital transformation initiatives. For example, CAST Highlight can:
- Conduct code-level, fact-based checks validated by application owners. Many Highlight users end up saying, "Wow, I didn't know my app was using this library, and it's the only app in the entire portfolio using it!"
- Detect more than 100 frameworks and libraries used in JEE, JavaScript, .NET, C/C++ and PHP projects, such as Hibernate, AngularJS, Backbone, Qt, Zend and Symfony (see the full list here).
- Aggregate framework information into a dedicated dashboard based on technology, usage type (for example, web/mobile, UI framework, logging, API management, etc.), license type (LGPL, MIT, proprietary, etc.) and version numbers.

CAST Highlight aggregates essential data at the portfolio level to help app managers and DevOps teams quickly identify framework redundancies, establish technical roadmaps, reduce the heterogeneity of framework landscapes to rationalize integration costs, and better allocate resources while defining required skills for future hiring. To learn more about how Highlight can streamline your framework and application development processes, check out our free trial offer.

CAST Celebrates 25 Years of Customer Success at Oktoberfest in Munich - Wed, 19 Oct 2016

Customers gathered in Munich to discuss why software measurement is critical to the success of their IT projects. Key System Integrator partners also presented ways to help customers with technical problems or aspirations to change, like digital transformation, IT modernization and cloud readiness. They shared best practices for addressing challenges to change and for using CAST metrics as a benchmark to move forward. The most popular topics during the conference included Cost Efficiency, Application Security, IT Transformation, Quality Assurance, Vendor Management, Benchmarking and KPIs.
During the round table discussions, all attendees had the opportunity to raise questions, and lively conversations took place. Above all, it was exciting to see our customers make progress toward their transformation goals using CAST and industry standards to guide the way. CAST is committed to giving the best support for overcoming modern business and IT challenges, and has been for more than 25 years. To top off the valuable conference sessions, we celebrated at Oktoberfest. The evening was filled with great conversations and engaging dialogue against a backdrop of Bavarian beer and traditional music. Thank you again to our customers and partners who made this event such a success. We look forward to seeing you in Munich again next year! To keep up with us in the meantime, check out our events page.

IWSM Mensura: Software Measurement Makes Data More Valuable - Tue, 11 Oct 2016

CAST attended IWSM Mensura 2016 in Berlin, which hosted software measurement professionals and researchers from all over the world to discuss maximizing the value of data. With digitalization trends, there is more data than ever before in software applications and systems, and that data is expected to drive business value. Software measurement is the key to making this data actionable. A vital step that's often overlooked is application benchmarking, which provides an effective baseline to measure progress, quality and productivity. Before starting digitalization, modernization or even cloud migration initiatives, both the business and IT units should begin with an objective benchmark, set at the outset of the project, in order to be most successful. In our session, "Measuring Software Quality with CAST," Daniel Maly, Managing Director of CAST DACH, and I spoke about the value Highlight and the CAST Application Intelligence Platform bring to successfully managed software development and modernization projects.
Highlight gives a bird's-eye view of broad application portfolios, helping establish modernization priorities and identify hot areas of risk. CAST AIP takes the process a step further by conducting system-level analysis to assess the true quality and risk that something like digital transformation will bring to the organization. There was particular interest in how CAST applies automated function point counting to further modernization initiatives. In fact, at the end of the three-day conference, the best paper award went to "Functional Size Measurement Patterns: A Proposed Approach." By automating the function point counting process, CAST makes it possible to quickly and precisely measure system complexity and software productivity, two measures with a clear impact on the bottom line. Another topic discussed at the conference was how the true value of a company increasingly hides in its data. This hidden data can only be used to its fullest if it is made visible and then mapped across the entire value chain. CAST, along with other leading quality institutions like CISQ and the Software Engineering Institute, is working to make the invisible visible. To learn more about how CAST can help you take control of your transformation projects, contact us.