CAST Software: On Quality Blog
Because Good Software Is Good Business

How to Manage Software Risk and Cost in Digital Transformation (21 Jun 2017)

A session held in Madrid with CISQ featured Dr. Richard Soley, CEO and Chairman of the Object Management Group, on how to control software risk and cost in Digital Transformation. A main objective of successful companies today is to improve the experience of their customers and become more competitive by taking advantage of modern technologies. This leads to corporate-wide Digital Transformation initiatives in which IT managers play a key role in managing the risks and costs associated with the transformation. Yet software complexity has emerged as one of the most significant barriers to transformation success. Software complexity, combined with the need to reduce time to market, can increase the risk and cost attached to software development and maintenance, especially those related to security, robustness, efficiency and maintainability. Jaime García Cantero, IT Analyst, opened the session by illustrating the context of Digital Transformation in which organizations are currently immersed. Dr. Richard Soley and Paul Bentz, Director of Government and Industry Programs at the Consortium for IT Software Quality, shared how OMG and CISQ are solidifying the software quality standards used for structural quality analysis, benchmarking and functional size measurement, and their applicability to SLAs and vendor relationships. In conclusion, they discussed why measurement is important to transformation success.
Following the presentations from OMG and CISQ, attendees heard from an expert panel including Ana Arredondo, Head of IT at Oficina Española de Patentes y Marcas; Carlos Castellano, CIO at AVIVA Servicios Compartidos; Araceli García, Global IT Country Service Lead at IAG GBS Spain; Alejandro González, Director of Innovation and Digital Business at Grupo Caja Rural; and Francisco Pomar, Director of Architecture and Methodology at Banco Popular. The panel addressed:

- Software security and risk management, based on objective metrics
- Software complexity management from a technology and architecture point of view
- Digital Transformation enablers

Netflix Envy (12 Jun 2017)

There is a lot of talk about DevOps these days. I guess you've noticed that too, if you have anything to do with tech and haven't been living in the woods for the last three years. I spoke on a panel a few weeks ago at the MIT CIO Symposium called Running IT Like a Factory. One of my co-panelists talked a lot about cloud-native companies, and how Netflix does 3,000 releases per month and Amazon does 11,000 releases per year. He also referenced the robustness of AWS and how companies like this can create a ton of value very quickly. Netflix and Amazon both have a market cap that's the envy of any large enterprise. Feedback from the media and analysts isn't helping. By comparison, many enterprise executives are made to feel that the work their IT organization is doing just doesn't measure up to what Silicon Valley startups can build with small teams of pimply teenagers in their garages. This is unfortunate. Whether Amazon is releasing apps at a rate of 11 per hour or 11,000 per year, the message to IT pros is the same: theirs goes to 11; yours probably only goes up to 3. Somehow release rate and cycle time seem to have become the vogue metrics for application development. If your cycle time is nice and short, you're like the cloud-native Silicon Valley geniuses.
This is what we, in the measurement world, call vanity metrics. But since Wall Street values cloud-native startups on a different scale than Fortune 100 enterprises that generate revenue, we might as well turn our app dev metrics upside down. What bothers me is that we are presumably a community of smart people and technologists who are connected digitally (by Twitter, the media, etc.), but for some reason we behave like a herd of sheep and buy into foolish conclusions fanned by mass hysteria. How could the number of releases be a relevant measure for anything? What if I screw something up five times a day and have to re-release it with that same frequency? What if each release is just a tiny little tweak? How can I compare that to a release of a critical application that synchronizes three dozen projects across the enterprise? The biggest fallacy here is the notion that these cloud-native companies actually have systems that require any significant level of resilience. Nor do they carry a truly significant level of software complexity. Netflix and Amazon don't have anything near the mission-critical systems of, say, a bank. You might object that the Amazon ecommerce site is mission critical. Sure. It's pretty critical. But if they add a new feature and it makes you lose your order, or sends you the wrong item, then Amazon will just make good by giving you that item for free. You'll be happy, and Amazon will roll back and fix their bug. And that's the worst case. In most cases, the site will just have yet another glitch that we won't even notice, given the level of glitchiness we've come to expect from web apps. And don't even get me started on the mission criticality of Netflix's systems. If you're a bank and you screw up a transaction, that will be noticed. If you do it enough times, it will be reported by the Wall Street Journal. Your regulators will breathe fire on you, and your customers would rather hide their money in their mattresses.
Slightly higher stakes, I think. I know this is a bit of a rant, but the point is that we can't blindly follow the same metrics in all scenarios, and we can't compare apples to oranges. Cycle time and release rate metrics do tell us something of relevance, but it's far from the whole story, even when considering throughput. And rolling out canary releases while tracking MTTR for your fast fails is nice, but it doesn't fit every business scenario. We've been seeing Netflix and Amazon both showing up at architecture and app dev events, looking for solutions to deal with their growing system complexity. It's nice to be lightweight and greenfield. But eventually, and usually it's when you start to actually generate revenue, your technical debt catches up and things stop being so simple and carefree. This is when the real work of managing your technology landscape begins. Our cloud-native friends are starting to come full circle. At some point they too will look like legacy. Let's hope they are building robustness and changeability into their code bases. Otherwise they too will suffer next-gen envy.

Need for Holistic IT Systems' Risk Assessment (31 May 2017)

The IT failure that caused the UK's national carrier to cancel all its flights worldwide at the start of the May bank holiday, along with the WannaCry ransomware attack which ground the National Health Service to a halt, has once again exposed the importance of IT systems in today's business. The complexity of these IT systems, and the number of vulnerabilities that exist in critical software used by critical infrastructure sectors such as the NHS, airlines and telecom operators, has made headlines once more. The reality is that this situation is set to become even more critical unless we tackle the core issues.
With rapid technological changes such as the Internet of Things (IoT), Artificial Intelligence (AI), automation and robotics, the problems are set to accelerate for IT systems that are already complex, legacy and outsourced. Macro-economic changes like Brexit, and the disruption that technology and regulatory changes are causing in sectors like banking, telecoms, retail and airlines, are all compounding each other to create pressure on IT departments like never before. The blame game for the BA outages had already started, with the unions blaming BA's IT management for outsourcing jobs to Indian outsourcing firms. As for the WannaCry hack, NHS Trusts were called negligent for not patching a known security vulnerability. But there are core underlying causes we should address if we want to make this country the FINTECH leader for the world, or realise our dream of building the next Google, Apple, Facebook or Amazon.

Culture

We see different issues across management levels in organisations. However, the need for a stringent software engineering mindset and discipline in this country is a common thread. The latest CRASH report from CAST Software, which analysed 2B+ Lines of Code (LOC) across 400+ organisations globally, found that the code quality of software used in the UK lags its European and American peers on criteria such as security and robustness. No wonder we seem to have more than our fair share of IT glitches in this country, across banks, the public sector and now airlines. At the top of the hierarchy, most organisations don't have board representation for IT departments, and there is still a level of apathy towards IT risk. The days when IT could be treated as back-office and a cost centre are long gone, but that doesn't seem to be reflected in the attitude we still have towards IT in this country. That is not to suggest that IT mid-management does its businesses any favours either, given the lack of objective visibility it provides into its estate.
With a majority of IT systems now in their second or third generation of outsourcing contracts, IT managers have very little visibility into the underlying risk and security vulnerabilities in their estates. This is not a call to reverse the trends of globalisation that have led to offshoring, but a call for more objective and predictive Service Level Agreements (SLAs) in outsource vendor management contracts: SLAs that monitor and measure improvements in Technical Debt and complexity, rather than rewarding the supplier for just keeping the lights on, delivering cost savings and leaving the Technical Debt as a liability for their successor. With an average CIO tenure of less than two years, the current attitude is hardly surprising. At the engineering level, security is an afterthought; developers often think of themselves as 'artists' rather than programmers who have to follow coding standards and best practices. And because spending more of the IT budget on risk prevention leaves less to spend on delivering technology innovation, a culture of 'code now, fix later' takes hold. This is a cultural issue, which most managers outside of IT would recognise as one of the toughest to fix. As with many IT decisions, the correct response is to compromise. Making good compromises requires being fully informed of the facts, and obtaining those facts about the holistic risk level across critical systems is a fundamental starting point. Adopting such a continuous review requires the right analysis, automated by a software analytics platform such as CAST's Application Intelligence Platform. Once a clear understanding of software risk is available to management, mapping those risks against business priorities allows prioritisation to occur. Once priorities are established, a proactive approach to paying off Technical Debt can be initiated.

Prioritisation

The complexity of the job at hand for IT execs should not be underestimated.
With an average of 5,000 new vulnerabilities emerging every year, it is not an easy task to prioritise and decide which vulnerability to patch. Couple this with the Technical Debt in the vast amounts of bespoke, outsourced legacy software, and you have an extremely difficult situation that is almost impossible to manage. Technical Debt, like the cost of patching the systems compromised by WannaCry, is very easy to ignore until it is too late. The solution, a holistic approach to assessing and prioritising the thousands of known vulnerabilities and violations across the IT estate of most organisations, makes far fewer national press headlines than hospitals shutting down, or a teenager accessing the personal details of ~154,000 Talk Talk subscribers by exploiting a SQL injection vulnerability that had been well known in security circles for more than 20 years. The analogy I would draw is that of a person who is frequently admitted to hospital. One day it's flu, another day a liver problem, another very high blood pressure. While the doctors treat these symptoms with specific medicines, they will also perform a full body scan, blood cultures and so on, because the patient keeps returning so frequently. The reasons behind IT outages are similarly varied: cyber hacks in which security vulnerabilities are exploited, power outages, disaster recovery failures and process breakdowns. But just as we would assess the overall health of the patient and not just treat individual symptoms, we should assess the overall health of the IT estate (Technical Debt, complexity, security, etc.) and not just strengthen the external perimeter to prevent hacks or build a more resilient disaster recovery process. Only when we tackle core issues like overall IT complexity will we be able to manage these threats better.
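The Talk Talk flaw mentioned above follows the classic pattern: user input concatenated directly into SQL text. A minimal, self-contained sketch of the weakness and its standard fix (bound parameters), using an in-memory SQLite database with invented subscriber data and illustrative function names:

```python
import sqlite3

# Tiny in-memory demo database; the subscriber rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO subscribers VALUES (?, ?)",
                 [(1, "alice@example.com"), (2, "bob@example.com")])

def lookup_vulnerable(user_input):
    # BAD: input is spliced into the SQL text, so a value such as
    # "1 OR 1=1" rewrites the query's logic and dumps every row.
    query = "SELECT email FROM subscribers WHERE id = %s" % user_input
    return conn.execute(query).fetchall()

def lookup_safe(user_input):
    # GOOD: a bound parameter is treated strictly as a value,
    # so the same hostile input matches nothing.
    return conn.execute(
        "SELECT email FROM subscribers WHERE id = ?", (user_input,)
    ).fetchall()

hostile = "1 OR 1=1"
print(len(lookup_vulnerable(hostile)))  # 2 -- every subscriber leaked
print(len(lookup_safe(hostile)))        # 0 -- attack neutralized
```

Every mainstream database driver supports bound parameters, and static analysis against CWE-89 catches the concatenation pattern automatically, which is exactly why a 20-year-old weakness class is so inexcusable to ship.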
Recap: Software Risk & Innovation Summit 2017 (3 May 2017)

Digital leaders succeed in large part due to their ability to recognize and scale innovation across their business, seeing beyond transformation hurdles and IT complexity. They never lose sight of the end goal. So, what does it take to be a digital leader? As a sponsor of the Software Risk & Innovation Summit last week in New York City, I was able to hear from some of the leading experts on the matter, including CISQ, JetBlue, COACH, Fannie Mae, BCG and others. Under the umbrella of software risk, attendees heard about two major issues: How can IT leaders accelerate DevOps transformation while maintaining software quality? And how do digital leaders balance the drive toward innovation with mitigating IT risk?

Is DevOps really right for you?

A big discussion point during the first panel was "don't just assume DevOps is right for you because it feels like everyone is doing it." As articulated by Marc Jones of CISQ: "There's this assumption that everyone is doing DevOps, but it goes beyond just changing operations. When it comes to core, critical applications, a strong control mechanism needs to be in place." It's not always obvious what the criteria for applying DevOps are. Most enterprises today have thousands of applications. "In order to do DevOps successfully, you have to know how DevOps integrates across the entire organization," said Carroll Moon of Microsoft. You can't just pick an app and go; you "must be granular about accountability and get specific metrics and monitoring in place" to achieve long-term success, he said. The full-stack engineers that good DevOps requires, and the security professionals needed to control cyber risk, are in short supply. This feeds the need to automate as many of the risk controls as possible into the process. In discussing organizational challenges to DevOps, it was also clear that getting operations teams up to speed is a main hurdle.
"More change is needed on the ops side to embed the correct controls," said Louis Garzon of COACH. "You might be getting safety and risk outcomes, but there can be big ripples on the culture side that prohibit growth and stability." When it comes to ensuring software quality in DevOps, the consensus from the panel was that a "fail fast" methodology is typically quite helpful. Even if you are pushing out poor-quality code in an incremental release, employing an A/B testing sandbox can minimize risk to the organization. "You can't wait for quality testing down the line," said Ramki Ramaswamy of JetBlue. "IT must bring QA and compliance upfront as key priorities." Doing so further reduces the risk of fail-fast practices, allowing teams to push new functionality to customers without exposing them to undue risk.

What risks should digital-first companies address today?

Following the first panel, executives from BCG, IBM, Fannie Mae, Bank Hapoalim and Venable took the stage to discuss how digital leaders can effectively manage risk while driving innovation. "A big part of success here is being able to course-correct quickly," said Benjamin Rehberg of BCG. "You must always be calculating this enterprise risk equation in the back of your mind." Modern business models have also seen a significant shift of outsourcing to the Cloud over the last five or more years. While this can increase the phenomenon of shadow IT, there is also an opportunity for digital leaders to benefit from Cloud provider specialization. For example, a breach of AWS would be devastating to thousands of companies, so Amazon takes security very seriously and has vastly more resources to dedicate to cybersecurity than, say, a typical Fortune 1000 company. There is also huge value here when it comes to cloud migration and cloud modernization.
On the other hand, some financial institutions still consider Cloud a "four-letter word." Regulators are starting to take notice of this outsourcing trend and weigh it more heavily. But with startups invading the marketplace, regulators must consider a diverse landscape. "Regulations are complicated, chaotic and changing," said Don Andrews, Partner at Venable. "It can be hard for startups, fintechs and regional banks to comply with regulations that are really meant for larger institutions. The big question in the immediate future is: will government have the courage to scale back regulations to accommodate this trend?" The panel agreed that one of the most important paths to successful and safe innovation is strong collaboration between regulators, chief risk officers and technologists. Sometimes it's the regulators who are the first to see an opportunity to simplify the risk environment. An atmosphere that encourages communication between GRC stakeholders and IT teams is key.

CISQ Is Helping CIOs Master Digital Transformation (7 Mar 2017)

Paul Bentz of CISQ recently published an article detailing the imperative for CIOs to become digital leaders. Research from Gartner confirms that high-performing CIOs are leaders because of their participation in a digital ecosystem. To effectively drive transformational programs, CIOs must have a keen understanding of how digital drives both business and IT success. We are proud to be a sponsor of CISQ's Software Risk and Innovation Summit on April 27th in New York City, where digital leaders will be discussing the future of digital transformation and DevOps. We hope to see you there! You can register here. In the meantime, have a look at what Mr. Bentz has to say about digital leadership. An excerpt of his article is below. See you in April!

===

Design is more important than ever. Integration is more important than ever. Architecture is more important than ever.
The standards CISQ supports are more important than ever to drive transformation while improving IT application quality and reducing cost and risk. Now, taking all this and making it easy for the business to consume and agile for IT to deliver? This is where digital leaders are born. At April's Software Risk and Innovation Summit, we will be talking about this very thing: how CIOs can maximize their digital knowledge and available tools to drive change through IT all the way to the business. Specifically, we are hosting two panels:

- DevOps and Digital Transformation: how to accelerate DevOps transformation in fast-paced and highly regulated environments
- The Future of Innovation in IT: how trends in technology, business and policy are influencing IT strategy, thinking and skills

The lesson is: don't delay your growth as a digital leader. With more visibility and opportunity comes more responsibility. Gartner's 2016 CIO survey showed that IT spending in support of digitalization will increase by 10 percent by 2018, and that top-performing businesses, where digitalization is already "fully baked" into planning processes and business models, will spend 44 percent of their IT budget on digital by 2018.

Are Digital Strategies in Banking Working? (22 Dec 2016)

This post follows a TechMarketView round table in London discussing the effectiveness of digital strategies in banking. It's no surprise that banks are facing significant headwinds heading into 2017, including geopolitical uncertainties, increased regulation, the need to modernize legacy systems and growing cyber threats. Digital is no longer "just another channel"; it's essential to success and to securing an optimal position with the next generation of banking customers. In order to capitalize on opportunities, bank management must establish solid KPIs to create and sustain the right behaviors in a digital environment.
Challenges to Digital Success

Today, software risk is one of the major sources of profit loss and security exposure for banks. Despite tightening regulations, there are still major technological challenges that legislators have failed to address. Technology risk is poorly understood by regulatory entities, and system-level risks are least understood at the software layer. As banks continue to build or re-engineer their software assets to meet market demand, they will also introduce new risks into their organizations. Within the banking sector, competition is intensifying between incumbent major banks, challenger banks and mono-line specialists. The aim of challenger banks is to ensure that the use of alternative banking methods and multiple banking relationships becomes accepted across the wider market. These banks have the advantage of not carrying the burden of a universal mandate; they can simplify their operations by choosing to decline applicants who would be complex to manage. Banks as a group also find themselves under threat from non-bank digital specialists. Here there remains considerable debate as to whether consumers consider "being a bank" necessary for an institution to be entrusted with the bulk of a person's transactions. For example, does Amazon have a trusted enough brand to play the key banking role? Digital pressures are also mounting for corporate and investment banks. In these sub-sectors, client expectations are high, posing challenges for bank CIOs and IT managers to modernize and upgrade legacy systems. Digital transformation is critical for this group. No matter the market specialty, operational and technology risk remain chief priorities for banking executives.

Cloud Readiness & Legacy Modernization

Incumbent banks know that they need to leverage cloud technology to compete with cloud natives in pace of change and innovation.
For many, considerable work is required to understand the implications of this delivery model and to enable the building of a satisfactory regulatory framework. More significant steps are required in the transformation of legacy systems, but the move to cloud is a necessary enabler in terms of meeting cost targets and facilitating improvements in user experience. To enable a smoother transition, CAST has created a Cloud Assessment using industry-proven metrics to assess key characteristics like structural robustness and efficiency, security vulnerability, architectural compliance and transformation potential. The challenge for established banks is made much greater by their complex legacy back-ends, which need to be able to deliver on the new demands of digital initiatives. This is particularly true in cloud migration. Simplification of the application "spaghetti" built up over a considerable period of time needs to be done with minimum risk to the ongoing operations of the bank, and as cost-effectively as possible. This crucial process requires a deep understanding of the interactions between applications and of the major potential "pain points" in the transformation programme. CAST tackles this complexity with System-Level Analysis, which provides a detailed report on the "health" status of applications in order to build transformation and development strategies. You can also read the full version of TechMarketView's research report, "Are the Digital Strategies of the Banks Working?"

Why Productivity Measurement Matters (13 Dec 2016)

A recent event, Productivity Measurement in the Context of IT Transformation, featured representatives from the retail, banking and insurance industries in the Netherlands. Featured speakers included CISQ, Allianz, BNP Paribas and METRI.
Productivity measurement is particularly useful for individuals who lead enterprise governance and measurement programs, in addition to practitioners working on business-critical software. Pragmatic approaches to automated software measurement are more important than ever, especially as the shift to digital continues. As CAST's Head of Product Development recently wrote in Computer Weekly, the main struggle for IT-intensive organizations as they push toward transformation is the layers of software complexity amassed over years of doing business. Even as new methodologies like microservices, DevOps and Agile are adopted to streamline development, instituting automated, standards-based productivity measures is key to success. Software productivity metrics are essential to ensure development teams provide the best value in the shortest amount of time, while helping their organizations determine the amount of input required to complete a software project. Standards-based measures, like those supported by CISQ, are essential to define a performance baseline as well as the productivity goals the organization wants to achieve. An important first step in software productivity measurement is therefore to establish a performance baseline and determine improvement goals. Once goals are set, the next step is to determine how teams will deliver functionality using normalized measures. Automated Function Points are a good unit of measure because they are objective, repeatable and can be applied to any application, whether new or existing. Another important step is to define the moment at which measurement should be performed, depending on the development methods used. This is where it becomes particularly important to use normalized measures in determining the functionality delivered by dev teams. Normalized measures apply regardless of the number of lines of code or the type of development work completed.
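To make the idea of a normalized baseline concrete, here is a toy sketch. The team names, function-point counts and effort figures are all invented for illustration; in practice, Automated Function Point counts would come from an analysis tool rather than be typed in by hand:

```python
# Hypothetical per-release data: Automated Function Points (AFP)
# delivered and effort spent in person-months. All figures invented.
releases = [
    {"team": "payments", "afp": 120, "person_months": 10},
    {"team": "payments", "afp": 90,  "person_months": 9},
    {"team": "claims",   "afp": 60,  "person_months": 8},
]

def productivity(afp, person_months):
    # Normalized output: function points delivered per person-month,
    # independent of language, framework or lines of code.
    return afp / person_months

def baseline(data):
    # Portfolio baseline: total function points over total effort.
    total_afp = sum(r["afp"] for r in data)
    total_pm = sum(r["person_months"] for r in data)
    return total_afp / total_pm

base = baseline(releases)
for r in releases:
    p = productivity(r["afp"], r["person_months"])
    status = "above" if p > base else "at/below"
    print(f"{r['team']}: {p:.1f} FP/PM ({status} baseline {base:.1f})")
```

Because the unit is business functionality rather than code volume, a Java team and a COBOL team land on the same scale, which is precisely what makes the baseline usable for improvement goals and vendor comparisons.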
As a result, organizations can accurately quantify how much output is provided based on the number of completed business functions. Software productivity measurement is a critical step in the project management lifecycle because it helps organizations to:

- Identify current productivity levels
- Assess delivered quality
- Rationalize the need for increased investments
- Streamline programming operations
- Pinpoint development bottlenecks
- Recognize underutilized resources
- Evaluate vendor-supplied value

To optimize software productivity measurement, it should also account for the organization's development processes and environment: the number of programming languages, the complexity of the application and the types of applications developed. Whether you want to measure the productivity of your development teams, measure the structural quality of your business-critical systems, or prevent software risk from leading to major outages or data corruption, a software analysis and measurement system is no longer optional. For more information about the event and its outcomes, contact us.

Following Best Practices to Achieve Application Security & Reduce Risk (6 Dec 2016)

Start with a software analytics platform (such as CAST's Application Intelligence Platform). Over time, your team will be able to establish secure architecture components that should handle all sensitive data. This is a foundational approach to data security and is essential to full security coverage, in addition to measuring the security hygiene of development teams and vendors. Instead of hoping that security can be layered on top of weak applications, your development team should be able to demonstrate that applications can be made more secure. Enforcing architectural guidelines can provide a standard of legitimacy for managing vendors as well. Secure application design needs to go far beyond a check-the-box approach of merely conforming to minimal regulatory standards.
Most organizations don't even go far enough in complying with such standards. While there are many software quality metrics that can be measured against an application, CAST uses a Security/Quality Scorecard based on OMG and CISQ measures for Security, Reliability, Maintainability and Security Debt. Based on observations in the field, the following table suggests potential thresholds: With the rapid evolution of standards, regulations and system complexity, a holistic approach to application security is needed. Coding or architectural issues that lead to security vulnerabilities can be among the most expensive to correct late in the lifecycle. To create a culture of increased security, all parts of the development and stakeholder organizations must be engaged, from requirements through operational maintenance. Application portfolios frequently evolve with little attention to inter-application or inter-module discipline, which is where the most critical security flaws occur. The key is to refine the metrics that matter in order to deliver a balanced scorecard reflecting your commitment to secure, low-risk applications. Many of the best-run companies use a similar set of analytics to reduce software risk while at the same time reducing costs. The payback is almost immediate. According to IBM's Data Breach survey, the average cost of a data breach in 2016 was $4 million. A recent IDC study pegs the average cost of application failure at between $500,000 and $1 million per hour. Not knowing where the weaknesses are is not a valid excuse or a successful defense. CAST has put together the tools you need to manage this threat successfully. Together with CISQ, MITRE and leading industry groups, we can help you embed security into your applications. Put us to the test and let us show you how this investment can keep you off the pages of the Wall Street Journal. This is the season for introspection, new resolutions, and finding ways to improve your status quo.
Get started with a structural health assessment of your mission-critical apps within 48 hours.

Technical Debt Indexes Provided by Tools: A Preliminary Discussion (29 Nov 2016)

This post offers a preliminary discussion of Technical Debt (TD) indexing as a method to evaluate development projects. We recently presented a paper at MTD 2016, the International Workshop on Managing Technical Debt put on by the Software Engineering Institute at Carnegie Mellon, where we discussed the way five different, widely known tools compute Technical Debt Indexes (TDI), i.e., numbers synthesizing the overall quality and/or TD of an analyzed project. In our analysis, we concentrated on the aspects missing from TDIs and examined whether (and how) the indexes consider architectural problems that could have a major impact on architectural debt. The focus on architectural debt is extremely important, since architectural issues have the largest (and most subtle) impact on maintenance costs. In fact, when problems appear at the architectural level, they tend to be widespread throughout the project, making them more difficult to spot and making evolution of the project more difficult, tedious, error-prone and ultimately slow. While architecture conformance checking technology is available (and suggested whenever possible), in recent years knowledge about generally bad architectural practices has been consolidating around the term "architectural smells". These smells, named by analogy to the more famous "code smells", point to a suspect architectural construct and signal trouble in project evolution. Architectural smells are usually detected through the analysis of dependencies. The most famous example, known for many years, is the Cyclic Dependency. Cycles make the separation of the components involved impossible: if the components of a cycle appear in different modules, the modules are not actually separable. Detection of Cyclic Dependencies is supported, for example, by CAST's System-Level Analysis and other solutions.
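As a rough illustration (not the algorithm used by any of the tools discussed), cyclic dependencies can be found with a depth-first search over a module dependency graph. The module names below are invented:

```python
def find_cycle(deps):
    """Return a list of modules forming one dependency cycle, or None.

    deps maps each module to the list of modules it depends on."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {m: WHITE for m in deps}
    stack = []

    def visit(m):
        color[m] = GREY
        stack.append(m)
        for dep in deps.get(m, ()):
            if color.get(dep, WHITE) == GREY:
                # Back edge to a module still on the stack: cycle found.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in deps:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[m] = BLACK
        return None

    for m in deps:
        if color[m] == WHITE:
            found = visit(m)
            if found:
                return found
    return None

# Hypothetical modules: billing -> orders -> inventory -> billing
graph = {
    "billing": ["orders"],
    "orders": ["inventory"],
    "inventory": ["billing"],
    "reports": ["orders"],
}
print(find_cycle(graph))  # ['billing', 'orders', 'inventory', 'billing']
```

Any module inside the reported cycle cannot be extracted or deployed independently of the others, which is exactly why this smell blocks module separation.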
Other architectural smells have been defined and are under study, combining dependency analysis with history and evolution analysis to expose hidden dependencies, or with pattern analysis to identify architecturally relevant elements (e.g., classes) in modules and analyze the impact of their issues on the architecture. In our research, we have already started to gather architectural smell definitions and to implement their detection in a small prototype tool called Arcan, which currently detects three architectural smells. We are now enhancing the implemented techniques through filters and historical information, as explained in our related ICSME 2016 paper, "Automatic Detection of Instability Architectural Smells."

Francesca Arcelli Fontana received her Diploma and PhD degrees in Computer Science at the University of Milano. She worked at the University of Salerno and at the University of Sannio, Faculty of Engineering, as an assistant professor in software engineering. She is currently an Associate Professor at the Department of Computer Science of the University of Milano-Bicocca. Her research interests include software evolution, software quality assessment, software maintenance and reverse engineering. She is a member of the IEEE and the IEEE Computer Society.

Marco Zanoni received his MS and PhD degrees in computer science from the University of Milano-Bicocca. He is a post-doc research fellow at the Department of Informatics, Systems and Communication of the University of Milano-Bicocca. His research interests include software quality assessment, software architecture reconstruction and machine learning.

Riccardo Roveda is a PhD student in Computer Science at the University of Milano-Bicocca (Italy). He received his bachelor's degree in 2011 and his master's degree in 2014. He works in the ESSeRE lab at the Department of Informatics, Systems and Communication (DISCo) of the University of Milano-Bicocca.
He has conducted studies on Architectural Erosion, Technical Debt and the correlation between Architectural Smells and Code Smells. He is an open source enthusiast.

Legacy Modernization is About Application Security, Not Just Cost (Thu, 17 Nov 2016)

From Yahoo's apparent cover-up of a massive security breach, which is damaging its merger with Verizon, to the even more recent bank hack in India, where millions of debit cards were compromised, it's apparent that there are holes in our current defense systems. Adding to the complexity of it all, eWeek has reported that DDoS attacks hit record highs in Q3 2016. For most data-intensive organizations, it would spell disaster if mission-critical or customer information were leaked. What's more, security gaps are known to go undetected for much longer in enterprises with a high percentage of legacy systems.

Many organizations are in the midst of digital transformations or cloud migrations to improve operational efficiencies and cut costs. A happy side effect of these modernization efforts is an opportunity to take a good look at application security. Keeping your organization off the front page of the Wall Street Journal requires creating a development culture committed to the reduction of security and quality risks in its mission-critical applications.

Despite the visibility of security risk, a surprising number of application developers do not understand how and where vulnerabilities are introduced into the code. Thought leadership groups like the Consortium for IT Software Quality (CISQ), MITRE, and the Software Engineering Institute (SEI) publish best practices for secure coding, which CAST has embedded into its core products. The Common Weakness Enumeration (CWE) is a list of security-related software vulnerabilities managed by MITRE, the most important of which also feed standards such as OWASP, PCI DSS and CISQ.
Many of these weaknesses, such as the improper use of programming language constructs, buffer overflows and failures to validate input values, can be attributed to poor-quality coding and development practices. Improving quality is a necessary condition for addressing software security issues, as 70% of the issues listed by MITRE as security weaknesses are also quality defects. Quality is not an esoteric value; it is a measure of risk in the software.

Many organizations put QA gates in the software development cycle; however, manual inspection finds only about 20% of the defects within individual modules. This typical 80% miss rate for manual inspection is bad enough, but 50% of security breaches occur at module integration points, and manual inspection misses most of the inter-procedural problems that arise when system components are integrated. Though workstation code checkers find some of the intra-module defects, almost none of the inter-module defects behind those security breaches are found manually. With only 20% of the code defects uncovered, most of the security flaws will sail through undetected.

Unit-level automated code scanners do not detect inter-module flaws either. In practice, 92% of code review findings are unit-level and not dangerous. Only about 8% of findings are inter-module, system-level flaws. These are the issues that need to be addressed during IT modernization projects, prior to fielding the software release, as they are the most serious. For example, the Heartbleed vulnerability defied discovery for years because the flaw involved module interactions. As systems become more complex, the interactions between system components introduce numerous points of failure. Both static and dynamic analysis must be part of a quality and security assurance strategy, since detecting many non-functional, structural defects is extremely difficult with traditional testing.
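To show what a unit-level automated scanner does (a deliberately toy version, nothing like the system-level analysis described here), a few lines of Python can walk a file's syntax tree and flag calls to dangerous built-ins such as `eval`, a classic input-validation weakness. The deny-list and sample snippet are invented:

```python
# Toy unit-level scanner: flags calls to dangerous builtins in Python source.
# Real analyzers (and system-level platforms) go far beyond this sketch.
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # illustrative deny-list

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line, function-name) pairs for each dangerous call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = input()\nresult = eval(x)\n"  # eval of unvalidated user input
findings = scan(sample)
```

Note what the sketch cannot do: it inspects one module at a time, so a tainted value passed across a module boundary before reaching `eval` would sail through undetected, which is exactly the inter-module blind spot described above.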
Functional testing consumes so much time that little remains for security. Hack testing finds only one way to breach the system; automated inspection examines all the components for exposure points. CAST helps IT-intensive enterprises through IT modernization projects by illuminating a full-picture view of both application- and portfolio-level health. With industry-leading security and risk standards embedded in the Application Intelligence Platform, measuring the Reliability, Security, Performance Efficiency and Maintainability of your software source code is a breeze. Get started with a structural health assessment of your mission-critical apps within 48 hours.