CISQ & IT Risk Management: Minimizing Risk in Government IT Acquisition

On March 15, CISQ hosted the Cyber Resilience Summit in Washington, D.C., bringing together nearly 200 IT innovators, standards experts, U.S. Federal Government leaders and attendees from private industry. The CISQ quality measures have been instrumental in guiding software development and IT organization leaders concerned with the overall security, IT risk management and performance of their technology. It was invigorating to be amongst like-minded professionals who see the value in standardizing performance measurement.

IT Leaders Address the Value of Software Measurement & Government Mandates Impacting Development

IT leaders from throughout the federal government discussed how software measurement can positively impact their development processes at CAST’s recent Cyber Risk Measurement Workshop in Arlington, VA, just outside Washington, D.C. The event brought together more than 40 IT leaders from several government agencies, including the Department of Defense and the Department of State, along with system integrators and other related organizations. The group shared how their respective organizations are driving value to end users and taxpayers.
Measuring and managing software quality is not just about compliance with government mandates; it rests on the proposition that strong software quality, security and sustainability are paramount in their own right. Compliance, however, remains essential. Attendees voiced three primary points about software compliance:

Government mandates point to the fact that software must have a measurement component
Industry standards, such as those from the Consortium for IT Software Quality (CISQ) and the Object Management Group (OMG), are available and should be leveraged
Technology solutions exist to help public sector organizations address these mandates

CISQ Hosts IT Risk Management & Cybersecurity Summit

The Consortium for IT Software Quality (CISQ) will host an IT Risk Management and Cybersecurity Summit on March 24 at the OMG Technical Meeting at the Hyatt Regency Hotel in Reston, VA. The CISQ IT Risk Management and Cybersecurity Summit will address issues impacting software quality in the Federal sector, including: Managing Risk in IT Acquisition, Targeting Security Weakness, Complying with Legislative Mandates, Using CISQ Standards to Measure Software Quality, and Agency Implementation Best Practices.

Living Up to Standards

By definition, standards are supposed to be a set of bare minimum requirements for meeting levels of acceptability. In school, the students who took the “standard” level courses were those who were performing “at grade level” and just focused on graduating. Every April in the United States we must decide whether to take the “standard deduction” – the bare minimum we can claim for our life’s expenses – or whether we have enough to itemize our living expenses and therefore deduct more from our income before taxes.
In other words, standards are the vanilla ice cream of business requirements.
When it comes to Technology, standards are no different. They still represent baseline requirements for quality. What is different in Technology, however, is that the elements that make up those standards expand and grow beyond the parameters that the original formulators of standards could ever have imagined.
Even Bill Gates is alleged to have said in 1981 that “640K ought to be enough for anybody.” He has since vehemently denied ever having said it, but the point is that at the dawn of the personal computer age in 1981, nobody could have foreseen the need for megabytes of memory as standard on a computer, let alone gigabytes or even terabytes.
But as computer capabilities increase, so do the standards.
Government Standard
There was a time when the government standard for sharing information was really quite simple. Whether you called it “Loose lips sink ships” or just “Keep your mouth shut,” it was a low-tech standard for low-tech times.
With information sharing hurtling into cyberspace, however, it now takes more than someone’s silence to ensure that information is shared only with those who need to know. Recognizing this heightened need for standards that fall in line with today’s sharing capabilities, the Department of Homeland Security last month instituted a whole new set of standards for sharing classified information.
According to Nick Hoover in InformationWeek, “The directive names officials who will be responsible for the oversight of classified information sharing, and sets standards for security clearance, physical security, data security, classification management, security training, and contracting. These standards will apply both in government and in the private sector.”
The setting of these standards comes more than a full year after they were supposed to have been put in place (the original deadline had been February 2011) and many months after the Pentagon reported the loss of 24,000 files at a Department of Defense contractor as the result of a cyber attack initiated by a foreign government. Nevertheless, they are in place and are a step in the right direction…but there are other standards that still should be instituted before the government can truly say its IT systems are safe.
Standard Bearers
In addition to implementing a set of standards for how information is shared, the government needs to look at implementing a set of standards for the technology behind that information. Optimal software performance – from security to dependability to ease of use – depends upon application software living up to an appropriate set of standards for the day across all facets of software health. Organizations – both public and private – need to ensure that the application software that comprises their IT systems is sound in each of the software health factors, including security, robustness, transferability, changeability and performance.
As we know from the CAST Report on Application Software Health (CRASH) released in December, government applications score the lowest of any industry when it comes to transferability – the ease with which software can be used by another agency, which in government should be a necessity – and about middle-of-the-pack in the total quality of their applications. As the country’s largest employer, not to mention one of the world’s largest targets for cyber attacks, the Federal government needs to set a higher standard for its application software.
Failing to optimize the overall health of its IT systems may continue to prove costly for the Feds. A set of standards for software health needs to be established, and the applications within the federal IT system assessed against those standards to identify where issues exist. Failing to at least identify where the problems lie means the government would remain vulnerable to attacks like the one the DoD admitted to last year.
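To make that assessment concrete, here is a minimal sketch of what checking a handful of applications against per-factor health thresholds might look like. The five health factors come from the discussion above; the application names, scores and threshold values are hypothetical, not drawn from CRASH or any actual standard.

```python
# Illustrative sketch only: hypothetical 0-4 scores checked against
# hypothetical minimum thresholds for each software health factor.
HEALTH_FACTORS = ["security", "robustness", "transferability",
                  "changeability", "performance"]

# Assumed minimum acceptable score per factor (not an official standard).
THRESHOLDS = {factor: 3.0 for factor in HEALTH_FACTORS}

portfolio = {
    "benefits_portal":   {"security": 3.4, "robustness": 3.1, "transferability": 2.2,
                          "changeability": 2.9, "performance": 3.5},
    "logistics_tracker": {"security": 2.6, "robustness": 3.3, "transferability": 2.8,
                          "changeability": 3.2, "performance": 3.0},
}

def find_issues(apps, thresholds):
    """Return, for each application, the factors that fall below standard."""
    return {
        name: [f for f in HEALTH_FACTORS if scores[f] < thresholds[f]]
        for name, scores in apps.items()
    }

for app, gaps in find_issues(portfolio, THRESHOLDS).items():
    print(f"{app}: {', '.join(gaps) if gaps else 'meets all thresholds'}")
```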
When it comes to meeting standards for application software quality, “good enough for government work” should not and cannot be good enough.
 

Is Your ‘Wastebook’ Overflowing?

Every year, Senator Tom Coburn compiles the Wastebook – A Guide to Some of the Most Wasteful and Low Priority Government Spending of 2011.
According to Senator Coburn in the report:
“This report details 100 of the countless unnecessary, duplicative or just plain stupid projects spread throughout the federal government and paid for with your tax dollars this year that highlight the out-of-control and shortsighted spending excesses in Washington.”
The list of $7B in wasteful projects is as amusing as it is disturbing. As a taxpayer, I find that reading a list of this ilk makes our leaders and decision-makers seem detached, self-serving and simply incompetent. Political motivations aside, Coburn’s act of holding a mirror to the process – not to mention to the decision-makers – is a relevant example for us all.
In this economy, none of us can continue to get away with spending money poorly. The rationale behind decisions and priorities isn’t always apparent to those outside the process. However, especially in the business world, we count on those very people for support and to execute such decisions.
Fact-Based Insight

As reported in Government Computer News, this year’s report includes a number of high-tech projects “involving Twitter, Facebook, video games, podcasts, holographs and other new technologies.”
The ability to support decisions quickly and objectively has always been a portfolio management struggle. Many companies look to enterprise tools to bear this burden, when in reality most tools on the market are spreadsheets on steroids – hollow shells that simply automate poor practices. Instead, organizations should shift their focus to systems and processes that provide objective rationalization of their portfolios. We usually know such practices by the monikers Application Portfolio Management (APM) and Project Portfolio Management (PPM).
One of the most obvious examples of where APM or PPM fails is found in the notion of “technical risk.” Many tools offer the ability to view a portfolio in two dimensions: business value and technical risk. The business value component typically applies some science to user-defined inputs concerning an organization’s mission-critical applications, and these tools take the same approach when deriving technical risk. Combining these two semi-subjective indicators provides “insight” for decision-makers.
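As a sketch of that two-dimensional view, the example below sorts a few applications into the familiar value/risk quadrants. The application names, the 0–10 scoring scale and the cutoff are invented for illustration; the point that follows is that the risk score should come from objective, automated measurement rather than from the same self-reported inputs that feed the value score.

```python
# Illustrative sketch only: hypothetical business-value and technical-risk
# scores (0-10) sorted into the four quadrants portfolio tools typically show.
portfolio = [
    # (application, business_value, technical_risk)
    ("claims_processing", 8.5, 7.9),
    ("hr_self_service",   4.0, 2.5),
    ("legacy_reporting",  2.5, 8.8),
    ("payments_gateway",  9.0, 3.1),
]

def quadrant(value, risk, cutoff=5.0):
    """Classify an application by where it falls relative to the cutoff."""
    if value >= cutoff and risk >= cutoff:
        return "high value / high risk -> remediate"
    if value >= cutoff:
        return "high value / low risk -> maintain"
    if risk >= cutoff:
        return "low value / high risk -> retire or replace"
    return "low value / low risk -> monitor"

for name, value, risk in portfolio:
    print(f"{name}: {quadrant(value, risk)}")
```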
Taking the Risk Out of Technical Risk
There are many definitions of technical risk including this one from BusinessDictionary.com: “Exposure to loss arising from activities such as design and engineering, manufacturing, technological processes and test procedures.”
Rather than relying on subjective input for these parameters, wouldn’t employing a more rigorous process that provides an objective assessment of technical risk yield far better information? First, it would reduce the gaming of the portfolio rationalization process. Second, a repeatable and automated process would reduce the risk and effort involved in generating better-quality information.
If this capability existed today, I wonder how many projects would make it into the “Wastebook” portfolios of corporate IT departments. Well, it does exist, in the form of automated analysis and measurement platforms. These platforms give companies the comprehensive visibility and control needed to achieve significantly more business productivity from complex applications, along with an objective and repeatable way to measure and improve application software quality. And not only can they improve application software quality, they can identify areas of technical risk, help monetize them, and provide a basis for comparison with business value to determine a course of action.
I wonder what Government Computer News would have said about the duplicative and shortsighted decisions made by IT departments in the private sector if it were to apply automated analysis and measurement. Would it find the private sector’s “Wastebook” as overflowing as the government’s?

Days of Auld Lang Syne Best Not Be Forgot

Should old acquaintance be forgot, and never brought to mind?
Should old acquaintance be forgot, and days of auld lang syne?
Yes, many of us will find ourselves this weekend sipping champagne and singing the familiar lyrics of this centuries-old tune that has become as synonymous with New Year’s Eve as resolutions and the ball dropping in New York’s Times Square. But in a year when we saw one major outage, malfunction and security breach after another befall organizations that rely upon technology, we should heed a lesson from these verses.
2011: A Tech Odyssey
Easily the best-known tech issue of the year happened at Sony Corporation, which, in a span of four months, was victimized by more than a dozen hack attacks, most of them at the hands of the LulzSec group, and nearly all of them via SQL injection. In all, hackers gained access to more than 100 million of Sony’s customer data files. So massive were the breaches that even Sony’s insurance company refused to cover the losses stemming from 55 class action lawsuits and hits to its operating profits to the tune of $178 million.
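For readers unfamiliar with the attack class, here is a minimal sketch, using Python’s standard sqlite3 module and an invented users table, of how concatenating user input into a query opens the door that a parameterized query keeps shut. It illustrates SQL injection in general, not the specific Sony exploits.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text,
# so the payload rewrites the query and returns every row.
unsafe = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not SQL,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print("unsafe query returned:", unsafe)   # leaks the row
print("safe query returned:", safe)       # returns []
```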
But while Sony suffered the largest security breach, it was not the largest company to own up to problems with its technology. As usual, that spot was reserved for Microsoft.
First came April’s Patch Tuesday, which saw Microsoft release a record-tying 17 bulletins that patched a record 64 vulnerabilities, 15 more than the previous largest set in October 2010. Equally significant were the critical bulletins, which included security patches affecting Windows XP, Vista and Windows 7, at least some of which affected the kernel. The second came on November’s Patch Tuesday, when Microsoft passed on patching a zero-day vulnerability used in the Duqu malware attacks that allowed hackers to run arbitrary code in kernel mode; Microsoft instead offered a workaround for the issue.
Microsoft and Sony were far from alone this year. Game maker SEGA and security vendor RSA were also among those who were dinged by hack attacks. Meanwhile, software malfunctions, vulnerabilities and product recalls due to software structural issues befell Dropbox, Google and even Apple this year.
Financial Disarray
Financial organizations were hit hard this year. On the morning of February 25, we learned the London Stock Exchange had been forced to halt trading on its main market due to a technical fault in its barely two-week-old MillenniumIT trading system. Despite having been tested prior to implementation, the relatively new system’s failure was blamed on “algorithms.”
But the LSE was not alone. That same month, Nasdaq OMX Group confirmed that its servers had been breached and that suspicious files had been found on servers associated with its Web-based collaboration and communications tool, which senior executives and board members use to share confidential information. Over the succeeding two weeks, Euronext, Borsa Italiana (bought by the LSE in 2007) and the Australian Stock Exchange all suffered outages due to technical flaws. A few days later, Bank of America joined the exchanges when it suffered a major outage in its ATM network. And lest anyone think lightning does not strike twice, later in the year Bank of America’s website was taken down by a denial-of-service attack.
And the issues did not end there. The financial world had barely caught its breath over the flurry of outages in the first quarter when Chase and JP Morgan warned customers of a potential security breach. They were followed shortly thereafter in May by Citigroup, which announced its North American cards division had fallen victim to hackers who had finagled access to the names and information of more than 200,000 customers.
Man at the Top
Application failures at the government level were perhaps this year’s biggest surprise. The surprise was not their existence – the U.S. government is known to be one of the biggest targets for hackers worldwide and is even targeted by foreign governments – but the number of incidents it owned up to.
The most serious among them was a data breach at a Pentagon defense contractor in which 24,000 sensitive files were stolen by a hacker backed by an unidentified foreign power. The attack, which took place in March, was not revealed until July, and even then only as part of announcing a plan to better prepare the government and the military for cyber terrorism.
This revelation was followed later in the year by a breach at Pacific Northwest National Laboratory, a Department of Energy contractor, over Independence Day weekend and then news of the Air Force’s new drone being infected by a computer virus. And these were just a few of the literally tens of thousands of cyber attacks the government has had to fend off in the past year.
…And the Beat Goes On
I would like to say the list ends there, but there were also airlines, railways, medical devices and health care institutions that met with application failures that led to malfunctions and outages this year. Heck, they even found a software vulnerability that could allow someone to shut off a person’s pacemaker!
The sad part about nearly all, if not all, of these outages, malfunctions and breaches is that their roots lay not with an outside source – no, the outside source was merely the catalyst for disaster. What all of these issues had in common was a structural flaw, somewhere down in the bowels of the application, that had gone undetected.
Perhaps it was part of legacy code that once was valid, but after multiple generations is no longer a solid piece of work. Or maybe it was just a badly written piece of code that got passed over because it didn’t bother anything in the testing phases.
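To illustrate the kind of flaw that sails through functional testing, here is a hypothetical snippet; the function, table and database file are invented for the example. A unit test that checks the row was written will pass, yet structural analysis would flag the resource handling and the per-call schema check that only hurt under sustained production load.

```python
import sqlite3

DB_PATH = "inventory.db"   # hypothetical database used for this example

def record_shipment(item_id, quantity):
    """Functionally correct: any test that checks the row was written passes."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS shipments "
                 "(item_id TEXT, quantity INTEGER)")
    conn.execute("INSERT INTO shipments (item_id, quantity) VALUES (?, ?)",
                 (item_id, quantity))
    conn.commit()
    # Structural weaknesses a functional test never sees: the connection is
    # never explicitly closed, and the schema check runs on every single call.

if __name__ == "__main__":
    record_shipment("widget-42", 100)   # the test passes; the flaws stay hidden
```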
Whatever the reason this structural quality error happened, it shouldn’t have. Hopefully in 2012, companies will look back on all the problems in 2011 and realize that they need to increase the structural analysis of their application software to ensure they won’t be the next Sony, SEGA, Citigroup, et al.
Should old acquaintance be forgot and never brought to mind? HELL NO!
For as Edmund Burke is often credited with saying, “Those who do not learn from history are condemned to repeat it.”

CAST Defends the Defenders

From the Pentagon to the Department of Energy, government organizations have been hard hit this year by IT systems outages, performance issues and security failures, most of which have stemmed from structural quality issues. But as bleak as this may sound, the good news is that these problems seem to have served as a wake-up call.
The Department of Homeland Security has already taken steps to begin addressing software structural quality issues by acknowledging they exist and bringing in IT leaders who can help them spot issues and fix them. Similarly, the U.S. Air Force announced in October that it had certified CAST’s Application Intelligence Platform (AIP) to review its systems and applications and detect structural quality issues.
Now joining the effort to combat structural quality problems that assault performance and security of IT systems, the United States Army Program Executive Office for Enterprise Information Systems (PEO EIS) has also recruited CAST’s AIP. PEO EIS provides infrastructure and information management systems to the U.S. Army. It develops, acquires and deploys tactical and management information technology systems and products, while supporting the critical enterprise-wide Army ERP, Financial and Logistics systems that enable everything from supply provisioning to personnel management of troops.
The PEO EIS plans to use AIP to optimize enterprise application performance and stability and to manage the structural quality of critical software systems proactively. In addition to optimizing the performance of mission-critical application software used by the U.S. Army, CAST AIP will also augment the program’s delivery governance, drive down maintenance costs and increase the responsiveness of the Army’s applications to new requirements.