Is Application Security Risk a Result of Outsourcing?

There’s a common belief in the software development space that when companies outsource application development, the control they relinquish results in lower application quality and puts their projects at risk. Once again, however, CAST’s biennial CRASH Report, which reviews the structural quality of business-critical applications, has disproved this theory.

Making Software Quality the First Measure of Software Security

If you read the news these days, you would think that software security is something layered on top of existing software systems. The truth, however, is that software security needs to be woven into the very fabric of every system, and that begins with eliminating vulnerabilities by measuring software quality as the system is built.
During the CAST Software Quality Fall Users Group, Dr. Carol Woody, a senior member of the technical staff at the Software Engineering Institute (SEI) at Carnegie Mellon University whose research focuses on cybersecurity engineering, discussed the importance of software quality as a basis for security.

Closing the Back Door Through Code Analysis

Have you performed code analysis on your software recently? If not, you are in good company; many companies are failing to do the one thing that could improve their software security: making sure the software isn’t vulnerable to attack in the first place.
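To make this concrete, here is a minimal, hypothetical sketch of the kind of flaw a source code analyzer typically flags, along with its fix; the class, table, and column names below are invented for illustration and are not drawn from any particular product or codebase.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AccountLookup {

        // Vulnerable: user input is concatenated directly into the SQL statement,
        // so input such as ' OR '1'='1 changes the meaning of the query.
        // Analyzers typically report this as an injection weakness (CWE-89).
        public ResultSet findAccountUnsafe(Connection conn, String owner) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM accounts WHERE owner = '" + owner + "'");
        }

        // Safer: a parameterized query keeps the input as data, never as SQL syntax.
        public ResultSet findAccountSafe(Connection conn, String owner) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement("SELECT * FROM accounts WHERE owner = ?");
            stmt.setString(1, owner);
            return stmt.executeQuery();
        }
    }

Finding and fixing this sort of weakness before the code ships is exactly the kind of “back door closing” that code analysis makes possible.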

CISQ Hosts IT Risk Management & Cybersecurity Summit

The Consortium for IT Software Quality (CISQ) will host an IT Risk Management and Cybersecurity Summit on March 24 at the OMG Technical Meeting at the Hyatt Regency Hotel in Reston, VA. The summit will address issues impacting software quality in the Federal sector, including: Managing Risk in IT Acquisition, Targeting Security Weakness, Complying with Legislative Mandates, Using CISQ Standards to Measure Software Quality, and Agency Implementation Best Practices.

6 Root Causes for Software Security Failures and How to Fix Them

Whether you run on an on-premises platform, a mobile device, or a virtual cloud environment, security has always been the biggest concern. It is no longer shocking to hear about big banks, financial institutions, and large organizations shutting down their business or coming to a standstill because of an unexpected system crash, a security breach, or a virus attack.
Security outages are observed on every platform, and it is becoming more and more challenging to detect malicious intruders and prevent them from getting into our complex multi-tier systems.

‘Gate Closings’ Before Gimmicks

With all of the security issues appearing in the press these days, I’m often reminded of a conversation I had with John Kilroy, the former CIO at Cape Cod Hospital. At the time, I was doing media relations work for a company in the Health Care IT industry and was working with Kilroy, who has been retired for the last five years, on an article for one of the publications that covers that space.
The big issue back then was the Health Insurance Portability and Accountability Act, better known as HIPAA. The underlying security issues behind HIPAA are very similar to those being faced by every organization that keeps its data online today – keeping that information from being exposed to unauthorized persons.
Back then, everyone from Congress to the HCIT vendors to hospitals was looking for a technology fix to the problem. At the time, electronic medical records were in their relative infancy and were bashed by some as a security risk…although those in the HCIT field would argue that keeping patient files behind layers of firewalls and encryption is far more secure than keeping them in a physical rack on the desk of a woefully understaffed nursing station.
The article Kilroy and I were writing was about making electronic health systems ready for HIPAA. Something Kilroy said during our initial conversation about the article, however, still stands out to this day. He said, “Even more than making sure information doesn’t leak out of electronic health systems, because they’re usually secure, is getting our staff and the patients themselves not to reveal information in public places like the hallway and elevator.”
This conversation was definitely top of mind when I read a brief article penned for InformationWeek by Jonathan Feldman, director of IT Services for the city of Asheville, North Carolina, about how the training of personnel was far more important than the technology being used.
Kowtowing to Security Needs
Feldman laments in his opening paragraph:
IT pros tend to focus solely on technology to solve endpoint security problems. After all, if malicious software is the poison, it’s logical to look to signatures, heuristics, and cutting-edge detection for the antidote. But that’s a mistake. Human vulnerabilities–ignorance, inattention, gullibility–are just as exploitable as software vulnerabilities, if not more so.
Feldman follows this by offering a checklist of exercises he believes companies should conduct with their employees to ensure they understand not only the importance of keeping proprietary information proprietary, but also how to detect when they are being plied for information that should not get out. These steps involve:

Conducting a phishing drill
Hitting employees with in-your-face propaganda
Letting employees know there’s something in it for them
Making it fun and personal through group meetings to discuss the issue
Being supportive of employees’ due-diligence efforts
Getting the execs to talk to employees about how important their diligence is

While Feldman’s methods are a bit “gimmicky,” I do not disagree that getting employees on board and practicing less risky behavior when it comes to data leakage is important. Teaching employees how to detect phishing emails and how not to be duped into revealing passcodes is not only important to an organization, it is vital to its survival. However, saying that human vulnerabilities matter more than technological ones overlooks one fact: humans are fallible.
Human fallibility means that no matter how much security training a company conducts, one day something will slip through the cracks. If the structural quality of the applications being targeted does not receive equal attention, that slip could lead to a costly data breach, and it only takes one breach to cost a company millions of dollars or, worse, its reputation.
Moo-ving in a Secure Direction
One thing most security breaches have in common is that they are the result of some structural quality defect in an existing application that served as a point of vulnerability. If that vulnerability doesn’t exist, it stands to reason that attacks that slip through the “human” layer of defense will likely be thwarted. But just how can vulnerabilities in application software be detected?
Evaluating an application for structural quality defects is critical, since these defects are difficult to detect through standard testing, yet they are the defects most likely to lead to security breaches by unauthorized users.
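To illustrate what such a defect can look like (the class and file names here are invented for the sake of example), consider a “fail-open” error handler: the code behaves correctly in every functional test, but an unexpected error quietly disables an authorization check.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ReportExporter {

        // Structural defect: the permission lookup fails open. Any error reading
        // the ACL file is silently swallowed, "allowed" keeps its default of true,
        // and the export proceeds without authorization. The happy path passes
        // every functional test; only the error path exposes the hole.
        public void export(Path aclFile, String user, Path report) {
            boolean allowed = true;
            try {
                allowed = Files.readAllLines(aclFile).contains(user);
            } catch (IOException e) {
                // swallowed: no logging, no re-throw, no safe default
            }
            if (allowed) {
                deliver(report);
            }
        }

        private void deliver(Path report) {
            // placeholder for the actual delivery logic
            System.out.println("Delivering " + report);
        }
    }

Structural analysis flags patterns like the empty catch block and the fail-open default, which is precisely the class of defect functional testing tends to miss.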
A company that truly wants to address a security breach should first analyze the applications in its IT system to find which of them have structural quality issues that could have been the breaching point (or points). Historically, most companies have avoided doing this because all they had available was either manual testing, which is grossly inefficient in terms of accuracy, cost, and time, or comprehensive analysis platforms that only large enterprises could afford.
Today, however, there are automated analysis and measurement tools offered via Software as a Service (SaaS), which make it cost-effective and much easier to find application vulnerabilities in an IT system. In fact, with these SaaS options available, finding vulnerabilities should probably be bumped up in priority to BEFORE a breach happens.
After all, failing to find vulnerabilities before breaches is akin to closing the gates after all the cows have gone.

Fix a Hole, Stop a Bug

After a very mild winter this year, the Northeast part of the country found itself stuck in a prolonged “early spring” where, but for a couple of days, temperatures refused to warm up out of the 40s and 50s. We seemed to be stuck in the ether between “actual cold” and “comfy warm” for quite a while until the past week or so.
When the temperatures finally turned upward into the 60s and 70s, I happily threw open all the windows in the house to “air the place out.” Apparently, though, the insect population of my neighborhood had been waiting for this moment as well and took my open windows as an invitation to breach the many holes that have somehow sprung up in my screens over the years. On the bright side, I got a good bit of exercise chasing after flying critters with my fly swatter (oh, how I long for the days when environmentalists didn’t guilt us out of using that magical red can of bug spray).
As I went about my business swatting the flying insects that had infiltrated my house, I noted a certain irony in what I was doing. After many posts about fixing issues (i.e., holes) in application software to keep bugs from breaching a company’s infrastructure, I realized that I had failed to heed my own advice. So I did to my screens what any good software developer should do upon realizing there are holes in his software: I fixed them. I haven’t seen a flying critter in the last three days, despite my windows all being open.
There’s a Hole in the Bucket
Fixing holes might seem like the logical first preventative measure against an outside entity breaching a company’s software portfolio with bugs…or viruses or other manner of malware. Why is it, then, that many who claim to be “in the know” immediately jump to measures that help identify a breach rather than prevent it, or take a shotgun rather than a flyswatter to the problem?
In a recent post over at Dark Reading, John Sawyer writes yet another column about what companies need to do to prevent data leaks in their organizations. Most of his solutions are necessary, if predictable: encryption, locking down the network, and employee education. All are elements of any good security system, but, as so many security discussions do, his leaves out one significant element: eliminating the point of infiltration.
Encryption is not infallible. As for locking down a network, it is not only pretty drastic, but if the attack comes from within the organization, a locked-down network is about as effective as the infamous Maginot Line that was supposed to protect France from the Germans in World War II. (For those not into history the way I am, the Germans simply went around it.)
Employee education is extremely important, though, and I do credit Sawyer for bringing it up. So much of what troubles an organization’s IT portfolio is introduced by its own employees, both knowingly and unknowingly. Nevertheless, if there were no issues in the application software to be breached, data would not be leaked.
So Fix It
Like so many issues with the IT portfolios of today’s companies, problems with application performance – including security breaches – can be traced back to the structural quality of the software. What needs to happen, therefore, is for companies to make themselves more aware of the issues that exist within their portfolios.
To ensure that they are protected at the very core of their IT portfolios, companies need to perform thorough assessments of the applications in them. This base protection should start with automated analysis and measurement while software is being built or customized, so that at each stage of the build issues are caught and dealt with before they become problems. As studies have shown, with each successive stage of the build process, these issues become 10 times more difficult to mitigate than if caught in the previous stage.
However, just assessing applications during builds ignores the fact that structural quality standards are constantly increasing. What was sufficient for keeping data secure just a few years ago may no longer be good enough to keep it locked down. This is why companies also should perform periodic analysis of the entire portfolio to identify those applications that no longer measure up to the current standards for optimal structural quality.
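As a sketch of how this can be wired into a build (the report format, file name, and threshold below are assumptions for illustration, not the output of any specific analyzer), a simple quality gate can stop a build stage the moment critical structural findings appear:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Hypothetical build-stage quality gate: reads a findings report produced by
    // whatever analyzer the team runs (one finding per line, severity first) and
    // fails the build step when critical structural defects are present.
    public class QualityGate {

        public static void main(String[] args) throws IOException {
            Path report = Path.of(args.length > 0 ? args[0] : "analysis-findings.txt");

            List<String> findings = Files.readAllLines(report);
            long critical = findings.stream()
                    .filter(line -> line.startsWith("CRITICAL"))
                    .count();

            if (critical > 0) {
                System.err.println(critical + " critical structural finding(s); failing this build stage.");
                System.exit(1); // a non-zero exit stops the pipeline at the cheapest stage to fix
            }
            System.out.println("Quality gate passed: " + findings.size() + " findings, none critical.");
        }
    }

The same kind of gate, run against periodic whole-portfolio scans, is what flags the applications that no longer measure up to current structural quality standards.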
So when companies start to consider all the things they need to do to secure their IT portfolios, they should think of their homes and remember: the fewer holes there are, the fewer bugs there will be.