This year’s column will examine the ever-changing role of today’s CIO. Last year we touched upon the CIO’s new responsibilities in column five, “The Cloud Has Walls: The CIO’s New Responsibilities.” Why has the role of the CIO changed in recent years? About a month ago, I did a presentation with Jones Lang LaSalle’s (JLL) Matt Carolan and his group on Today’s Data Center Real Estate Market. Below are a few facts that JLL generated that I believe help answer this question.

  • 15 petabytes of new data are created every day (1 petabyte = 1 million gigabytes)
  • 281 petabytes of information in 1986
  • 471 petabytes in 1993
  • 2,200 petabytes in 2000
  • 65,000 petabytes in 2007 (the equivalent of every person exchanging six newspapers’ worth of content per day)
  • 90% of today’s digital data was created in the past two years (source: IBM)
  • Over 145 billion emails are sent per day
  • Over 100 hours of new video are uploaded to YouTube every minute
  • 75% of today’s data is generated by individuals, but enterprises will have some liability for 80% of it at some point
  • Twenty typical households now generate more internet traffic than the entire internet carried in 2008

As this data shows, processing demands have jumped dramatically within the last few years, and the CIO is challenged with constantly deploying new technologies. Five years ago, the CIO’s role was typically managing the IT department, often driven by the CFO and budget constraints. Today, the CIO is continuously looking for ways to drive technology to make a profit, gain market share, maintain a competitive advantage, and create operating efficiencies.

TCO AND THE NEWCOMER … CLOUD

Over the years, we have typically created a model that evaluates the Total Cost of Ownership (TCO) for build vs. colocation. Developing the study requires multiple disciplines, including engineers, technology specialists, real estate advisors, and construction management professionals for estimating. The services normally include (but are not limited to):

  • Extensive interview process (data center operations, facilities, applications, disaster recovery, finance, change management, real estate, and senior IT management)
  • Site surveys of existing data centers
  • Risk assessment of the existing data centers and new migration
  • Evaluation of existing IT operating systems and interdependencies
  • Data center prototype design
  • Site selection (greenfield and colocation facilities)
  • Colocation RFP development and bidding
  • Construction estimating
  • Financial comparison modeling (a simplified sketch follows this list)
  • Migration strategy and detailed planning
  • Operations modeling (new IT support)
  • Scorecarding and report development
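
To make the financial comparison modeling step concrete, here is a minimal sketch of a build vs. colocation TCO comparison. Every figure in it (capital cost, operating cost, study term, and discount rate) is a hypothetical placeholder; in a real engagement these numbers come out of the estimating, interview, and site survey steps above.

```python
# Minimal build vs. colocation TCO comparison sketch.
# Every figure below (capex, opex, term, discount rate) is a hypothetical
# placeholder; a real study derives them from the estimating work above.

def npv(cash_flows, rate):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def tco(capex, annual_opex, years, rate):
    """Upfront capital cost plus discounted operating costs over the study term."""
    return npv([capex] + [annual_opex] * years, rate)

YEARS, RATE = 10, 0.08  # assumed study horizon and discount rate

build = tco(capex=40_000_000, annual_opex=3_000_000, years=YEARS, rate=RATE)
colo = tco(capex=2_000_000, annual_opex=6_500_000, years=YEARS, rate=RATE)

print(f"Build TCO over {YEARS} years: ${build:,.0f}")
print(f"Colo TCO over {YEARS} years:  ${colo:,.0f}")
```

Note that the horizon and discount rate can flip the answer: a colocation option with low upfront cost can still lose to a build over a long enough study term.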

As discussed in my 2013 column series, “Year Long Road Trip — Getting to the Total Cost of Ownership,” the process did not include TCO as it relates to the cloud. As the cloud has become more popular, the CIO now needs to add this option into the overall equation. To do this, several variables must be examined in the process.

  • Since cloud offerings are typically considered an “outsourced” service offering, what are the associated risks of having someone else provide IT processing? These would include shared infrastructure risks, location and latency, SLAs concerning uptime, disaster recovery, firewall security, age of the primary data center, and other critical infrastructure questions.
  • Which applications make sense to migrate to the cloud? This requires evaluating critical versus non-critical applications.
  • Public vs. hybrid cloud and the process to migrate over.
  • Cost of cloud, including usage and peak processing (see the cost sketch after this list).
  • Creating a hybrid cloud will require significant upfront consulting concerning applications, migration, and ongoing support.
  • Impact on internal operations and personnel.
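
To illustrate the usage and peak-processing variable, along with the uptime SLA question above, here is a minimal sketch of a usage-based annual cloud cost estimate plus an SLA downtime conversion. The hourly rate, instance counts, peak assumptions, and SLA figure are all hypothetical; real numbers come from the provider’s pricing, the SLA itself, and the application profile.

```python
# Minimal usage-based cloud cost sketch for the TCO comparison.
# All rates, instance counts, and peak assumptions below are hypothetical.

HOURS_PER_YEAR = 8760

def annual_cloud_cost(baseline_instances, peak_instances, peak_hours, hourly_rate):
    """Steady-state usage all year plus extra capacity during peak hours."""
    baseline = baseline_instances * HOURS_PER_YEAR * hourly_rate
    burst = (peak_instances - baseline_instances) * peak_hours * hourly_rate
    return baseline + burst

def allowed_downtime_minutes(sla_pct):
    """Convert an uptime SLA percentage into allowable downtime minutes per year."""
    return (1 - sla_pct / 100) * HOURS_PER_YEAR * 60

# Example: 50 instances around the clock, bursting to 200 for 500 hours a year.
cost = annual_cloud_cost(baseline_instances=50, peak_instances=200,
                         peak_hours=500, hourly_rate=0.40)
print(f"Estimated annual cloud spend: ${cost:,.0f}")
print(f"A 99.99% uptime SLA allows ~{allowed_downtime_minutes(99.99):.0f} minutes of downtime per year")
```

Even a rough model like this makes the peak question visible: paying only for burst capacity is where cloud can beat a colocation footprint that must be sized for peak load.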

When combining the TCO from build vs. colocation with the cloud evaluation, the overall study becomes complex and time consuming. Oftentimes during the interview process, issues are uncovered that were not apparent prior to the TCO engagement. Additional investigation of application dependencies, zombie equipment, and legacy issues often slows the TCO process down, thus increasing the margin of error in the final report. A simplified example of how the options might be scored against one another appears below.
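
As a minimal sketch of the scorecarding step, the example below weights the three options across a handful of criteria. The criteria, weights, and scores on a 1-to-10 scale are entirely hypothetical; in practice they would come from the interviews, risk assessment, and the financial comparison model.

```python
# Hypothetical weighted scorecard combining the build, colo, and cloud options.
# Criteria, weights, and scores are placeholders; a real engagement derives
# them from the interviews, risk assessment, and financial modeling.

WEIGHTS = {"10-yr TCO": 0.40, "risk": 0.25, "migration effort": 0.20, "flexibility": 0.15}

SCORES = {
    "build": {"10-yr TCO": 5, "risk": 8, "migration effort": 4, "flexibility": 3},
    "colo":  {"10-yr TCO": 6, "risk": 7, "migration effort": 6, "flexibility": 6},
    "cloud": {"10-yr TCO": 7, "risk": 5, "migration effort": 5, "flexibility": 9},
}

for option, scores in SCORES.items():
    total = sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)
    print(f"{option:>5}: {total:.2f}")
```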

Once again, a basic question emerges: Do we want to go back to outsourcing as our primary source of IT operations?

In past columns I have addressed the outsourcing issue and the various risks associated with someone else managing a data center. In the 1990s, outsourcing became a trend that several financial institutions subscribed to, ending in downtime and other operational issues. The enterprise users that subscribed to outsourcing often found themselves acting as contract negotiators due to ever-changing technologies and IT trends.

On Friday, January 9, 2015, an Amazon facility in Ashburn had a fire during construction. While the data center supported the AWS offering, it did not go down and AWS was not affected. However, this brings up two important issues when considering cloud sourcing:

  • Outside influences which you cannot control may impact operations.
  • How closely can you really evaluate an outsourcing provider’s infrastructure when subscribing to cloud?

The CIO’s role is ever-changing, and the question of cloud is one of many we will address over the next year. In the next issue, we will take a deep dive into how the new CIO drives revenue via the technology now being deployed.