‘Gate Closings’ Before Gimmicks

With all of the security issues appearing in the press these days, I’m often reminded of a conversation I had with John Kilroy, the former CIO at Cape Cod Hospital. At the time I was doing media relations work for a company in the health care IT (HCIT) industry and was working with Kilroy, who retired five years ago, on an article for one of the publications that covers that space.

The big issue back then was the Health Insurance Portability and Accountability Act, better known as HIPAA. The underlying security issues behind HIPAA are very similar to those being faced by every organization that keeps its data online today – keeping that information from being exposed to unauthorized persons.

Back then, everyone from Congress to the HCIT vendors to the hospitals themselves was looking for a technology fix to the problem. At the time, electronic medical records were in their relative infancy and were bashed by some as a security risk…although those in the HCIT field would argue that keeping patient files behind layers of firewalls and encryption is far more secure than keeping them in a physical rack on the desk of a woefully understaffed nursing station.

The article Kilroy and I were writing was about making electronic health systems ready for HIPAA. Something Kilroy said during our initial conversation about the article, however, still stands out to this day. He said, “Even more than making sure information doesn’t leak out of electronic health systems, because they’re usually secure, is getting our staff and the patients themselves not to reveal information in public places like the hallway and elevator.”

This conversation was definitely top of mind when I read a brief article penned for InformationWeek by Jonathan Feldman, director of IT Services for the city of Asheville, North Carolina, about how the training of personnel was far more important than the technology being used.

Kowtowing to Security Needs

Feldman laments in his opening paragraph:

IT pros tend to focus solely on technology to solve endpoint security problems. After all, if malicious software is the poison, it’s logical to look to signatures, heuristics, and cutting-edge detection for the antidote. But that’s a mistake. Human vulnerabilities–ignorance, inattention, gullibility–are just as exploitable as software vulnerabilities, if not more so.

Feldman follows this by offering a checklist of exercises he believes companies should conduct with their employees to ensure they understand not only the importance of keeping proprietary information proprietary, but also how to detect when they are being plied for information that should not get out. These steps involve:

  1. Conducting a phishing drill
  2. Hitting employees with in-your-face propaganda
  3. Letting employees know there’s something in it for them
  4. Making it fun and personal through group meetings to discuss the issue
  5. Being supportive of employees’ due-diligence efforts
  6. Getting the execs to talk to employees about how important their diligence is

While Feldman’s methods are a bit “gimmicky,” I do not disagree that getting employees on board and practicing less risky behavior around data leakage is important. Teaching employees how to detect phishing emails and how not to be duped into revealing passcodes is not only important to an organization, it is vital to its survival. However, saying that human vulnerabilities matter more than technological ones overlooks one fact – that humans are fallible.

Human fallibility means that no matter how much security training a company conducts, one day something will slip through the cracks. If the structural quality of the targeted applications does not receive equal attention, that slip could lead to a costly data breach – and it only takes one breach to cost a company millions of dollars or, worse, its reputation.

Moo-ving in a Secure Direction

One thing most security breaches have in common is that they result from some structural quality defect in an existing application that serves as a point of vulnerability. If that vulnerability doesn’t exist, then it stands to reason that an attack that slips through the “human” layer of defense will likely be thwarted. But just how can vulnerabilities in application software be detected?

Evaluating an application for structural quality defects is critical: these defects are difficult to detect through standard functional testing, yet they are the flaws most likely to be exploited by unauthorized users and turned into a breach.
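To make “structural quality defect” concrete, here is a minimal, hypothetical sketch (the table, column, and function names are invented purely for illustration) of the kind of flaw that sails through ordinary functional testing yet leaves a door open. Both functions below return the right rows for a normal last name; only the second survives hostile input.

    import sqlite3

    def find_patient_unsafe(conn: sqlite3.Connection, last_name: str):
        # Structural defect: user input is concatenated straight into the SQL
        # statement, so input like "x' OR '1'='1" returns every patient record.
        query = "SELECT * FROM patients WHERE last_name = '" + last_name + "'"
        return conn.execute(query).fetchall()

    def find_patient_safe(conn: sqlite3.Connection, last_name: str):
        # Parameterized query: the driver treats the input strictly as data,
        # closing the injection point no matter what the caller passes in.
        return conn.execute(
            "SELECT * FROM patients WHERE last_name = ?", (last_name,)
        ).fetchall()

A functional test that looks up an ordinary name passes against both versions, which is exactly why this class of defect so often survives into production.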

A company that truly wants to address a security breach should first analyze the applications in its IT system to find which of them have structural quality issues that could have been the breaching point (or points). Historically, most companies have avoided doing this because all they had available to them was either manual testing – which is grossly inefficient in terms of accuracy, cost and time – or comprehensive analysis platforms that only large enterprises could afford.

Today, however, automated analysis and measurement tools are offered via Software as a Service (SaaS), which makes finding application vulnerabilities in an IT system far more cost-effective and far easier. In fact, with these SaaS offerings available, finding vulnerabilities should be moved up in priority to BEFORE a breach happens.
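The commercial analysis and measurement platforms described above are far more sophisticated than anything shown here, but a toy sketch illustrates the basic mechanism of automated structural analysis: walk the source code rather than run it, and flag risky patterns such as the concatenated query from earlier. This is an assumed, illustrative example only – the scanner class and its single rule are mine, not any vendor’s actual implementation.

    import ast
    import sys

    class InjectionRiskScanner(ast.NodeVisitor):
        """Flags execute() calls whose SQL text is assembled dynamically."""

        def __init__(self, filename):
            self.filename = filename
            self.findings = []

        def visit_Call(self, node):
            calls_execute = (
                isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
            )
            if calls_execute and node.args:
                first_arg = node.args[0]
                # Concatenation (BinOp) or an f-string (JoinedStr) as the query
                # argument suggests user input may be spliced into the SQL text.
                if isinstance(first_arg, (ast.BinOp, ast.JoinedStr)):
                    self.findings.append(
                        "%s:%d: possible SQL injection (dynamically built query)"
                        % (self.filename, first_arg.lineno)
                    )
            self.generic_visit(node)

    def scan(path):
        # Parse the file into an abstract syntax tree and walk it for findings.
        with open(path, encoding="utf-8") as handle:
            tree = ast.parse(handle.read(), filename=path)
        scanner = InjectionRiskScanner(path)
        scanner.visit(tree)
        return scanner.findings

    if __name__ == "__main__":
        for source_file in sys.argv[1:]:
            for finding in scan(source_file):
                print(finding)

Run against a file containing the unsafe function above, this prints a single warning pointing at the concatenated query; commercial platforms apply far broader rule sets across many languages and track the results over time.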

After all, failing to find vulnerabilities before breaches is akin to closing the gates after all the cows have gone.

