High-capacity network bandwidth has become more widely available, and we have quickly consumed every bit of it. More devices are built with Wi-Fi capabilities, the cost of mobile devices is falling, and smartphones are in the hands of more people than ever before. In fact, Apple may have already saturated the market and is seeing drastically lower sales forecasts for the iPhone.
We are moving into an era in which virtually any device will connect to the Internet: phones, fitness trackers, dishwashers, televisions, espresso machines, home security systems, cars. The list goes on. Analyst firm Gartner estimates that more than 20 billion connectable devices will exist worldwide by 2020. Welcome to IoT, the Internet of Things: a giant network of connectable things.
In April, Google experienced a fairly significant cloud outage, yet it was hardly news at all. In fact, it was likely the most widespread outage to hit a major public cloud to date. The lack of coverage is strange given the industry's watchful eyes, such as Brian Krebs and others; the even more recent Salesforce service outage seems to have received more attention. But although Google appears to have gotten a "pass" this time, the glitch brings renewed attention to the fact that tech players large and small continue to struggle with software robustness.
Google Compute Engine was down for a full 18 minutes around 7 p.m. Pacific Time on April 11, disconnecting all users in all regions. The root cause of the outage was a network failure. Network outages appear to be an ongoing challenge for Google, and this one was the biggest yet.
On March 15, CISQ hosted the Cyber Resilience Summit in Washington, D.C., bringing together nearly 200 IT innovators, standards experts, U.S. Federal Government leaders and attendees from private industry. The CISQ quality measures have been instrumental in guiding software development leaders and IT organizations concerned with the security, risk management and performance of their technology. It was invigorating to be among like-minded professionals who see the value in standardizing performance measurement.
IT leaders from throughout the federal government discussed how software measurement can positively impact their development processes at CAST's recent Cyber Risk Measurement Workshop in Arlington, VA, just outside Washington, D.C. The event brought together more than 40 IT leaders from several government agencies, including the Department of Defense and Department of State, along with system integrators and other related organizations. The group shared their experiences of how their respective organizations are delivering value to end users and taxpayers.
Measuring and managing software quality is not just about meeting government mandates; it rests on the proposition that strong software quality, security and sustainability are paramount in their own right. Compliance nonetheless remains essential. Attendees voiced three primary points around software compliance:
Software risks to the business, specifically application resiliency, headlined a recent executive roundtable hosted by CAST and sponsored by IBM Italy, ZeroUno and the Boston Consulting Group. European IT executives from the financial services industry assembled to debate the importance of mitigating software risks to their businesses.
All businesses recognize the importance of developing software within a budget. But how do you put together that IT budget in the first place? CAST has worked with a successful CIO to create a downloadable guideline of best practices. Saad Ayub, formerly CIO at Scholastic and The Hartford, suggests nine ways analytics supports better IT budgets.