Software-Defined Data Centers: Hype, Reality, And What's Next
The software-defined data center is either an overused buzzword for a sector filled with tire kickers, or the breakout trend of 2016, depending on which industry experts are quoted. Yet IT managers know there’s a profound transition taking place as control of data center gear shifts from hardware to software.
The latest data show they’re allocating budgets accordingly. According to an April 2016 survey, 66% of CIOs plan to expand their use of software-defined data center technologies this year.1 Spending for software-defined data centers is forecast to increase 14% in 2016, although present deployments represent just 21% of data centers surveyed in early 2016.2 For many enterprises, it’s no longer optional: Gartner estimated that by 2020, 75% of organizations will need to implement a software-defined data center3 in order to support the DevOps approach and hybrid clouds required by agile digital business initiatives.
First Servers, Then Networks
Thanks to virtualization technologies debuting over a decade ago, server and networking domains are already well on their way to software-defined control. By 2013, 51% of servers were virtualized,4 and today in 2016 server virtualization rates exceed 75% in many organizations.5 Software-defined network virtualization from Cisco, Juniper Networks, Barracuda, and others soon followed; a Gartner report cited in a May 2016 trade publication forecasts that 10% of customer appliances will be virtualized by 2017, up from 1% this year.6
For the data center, virtualization meant that the gear-filled glass houses where one could practically get a tan from the heat of servers, switches, and spinning disks no longer needed so much hardware. Less hardware meant less costly square footage, less electricity for operation and cooling, and smaller capital outlays depreciated over five years. The staff expertise required to operate the data center didn’t go away, but it shifted to more valuable activities as improved management interfaces made routine tasks easier.
Next Up: Storage
The next logical evolution of the software-defined approach is data storage — traditionally a big-iron, big-price-tag sector, and one that’s poised to deliver tremendous improvements. Research & Markets estimated the software-defined storage market at $1.4 billion in 2014,7 growing at about 34% annually through 2019 — though still just a fraction of the overall $36 billion storage market that year.8
The delayed embrace has a reason. SAN and NAS equipment has historically depended on custom-made ASICs,9 custom-made circuit boards, and custom-made real-time operating systems. The cost of developing that custom hardware and software, and of testing to assure interoperability, kept prices high and constrained both the easy rollout of features and easy on-site manageability. End users, in an effort to protect their most critical asset — their data — relied heavily on these big-iron solutions and were cautious about moving to new, unproven platforms.
Cloud storage saw early adoption for departmental applications, DevOps, and testing needs. Yet the “serious” IT work for applications requiring high availability, high performance, high IOPS, or low latency had to remain on-premises on traditional gear — which still required:
Overprovisioning — an upfront purchase of far more capacity than needed today
Difficult forecasting — future planning to know how much to buy five years out
Management — a guru or two on site to run, maintain, and upgrade the equipment.
Enter OpEx-based software-defined storage, with its new way of building, managing, and buying storage. These solutions are built on high-volume, industry-standard hardware and open operating systems that cost less, yet typically provide the same storage types, protocol support, and features users have come to expect from traditional CapEx-based, hardware-driven approaches.
Deliverables Not Possible Before
Just as software-defined feature sets reset expectations for servers and networking, software-defined storage solutions have reset the standard for what IT teams should expect from any kind of data storage.
Agility — defined as the speed at which the underlying resource can be changed. With older CapEx-based storage, when a storage administrator needed to expand capacity, it often took weeks or months to negotiate with a vendor, place an order, receive the equipment, and install and deploy the storage array. Particularly with newer software-defined approaches enabling storage-as-a-service, it typically takes minutes to expand capacity.
Elasticity — IT managers don’t have a crystal ball to predict what they’ll need in the next month, let alone the next five years — and they no longer want to be penalized in the form of overspending for storage resources they don’t need. With software-defined approaches, IT teams can scale resources both up and down quickly via a remote management interface. They can remain always aligned with a changing world.
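The elasticity described above can be pictured as a thin management layer over a shared capacity pool. The sketch below is a minimal in-memory illustration, assuming a hypothetical `StoragePool` interface; the class and method names are invented for this example and do not represent any vendor's actual API.

```python
# Illustrative sketch of elastic, software-defined capacity management.
# StoragePool, provision(), and resize() are hypothetical names.

class StoragePool:
    """In-memory stand-in for a software-defined storage pool."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> provisioned GB

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name: str, size_gb: int) -> None:
        if size_gb > self.free_gb():
            raise ValueError("insufficient free capacity")
        self.volumes[name] = size_gb

    def resize(self, name: str, new_size_gb: int) -> None:
        # Scale a volume up or down in place -- no hardware order,
        # no five-year forecast, no forklift upgrade.
        delta = new_size_gb - self.volumes[name]
        if delta > self.free_gb():
            raise ValueError("insufficient free capacity")
        self.volumes[name] = new_size_gb


pool = StoragePool(capacity_gb=1000)
pool.provision("analytics", 200)
pool.resize("analytics", 500)  # scale up when demand spikes
pool.resize("analytics", 100)  # scale back down when demand drops
print(pool.free_gb())          # 900
```

The point of the sketch is the contrast with the CapEx model: capacity moves in both directions through a software call, so unused headroom returns to the shared pool instead of sitting stranded in an overprovisioned array.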
Scalability — Traditional SAN and NAS architectures have physical hardware limitations on scalability. In contrast, software-defined storage can scale to hundreds of thousands of nodes.
Multi-tenancy — In the typical SAN and NAS configuration, because the devices are limited in scalability, they usually serve multiple purposes, with no reliable separation among workloads. For example, accounting’s fiscal year-end close applications might run on the same gear as a development team’s new software build. To ameliorate this performance issue, IT managers invest in multiple storage arrays and use physical separation to segregate workloads. That runs up costs, and unused space is simply wasted. It’s also a harder configuration to administer, because IT teams must manage a number of storage arrays, often from different vendors and with different management consoles.
In contrast, a software-defined storage system is designed for multi-tenancy. Users can relocate applications to unused storage to make the most of the available resources. If the solution is a cloud storage offering, the software lets IT track the actual costs incurred by each application, so departmental end-user groups can be billed by application and by department, not just by total storage usage. Note that the more advanced storage-as-a-service offerings provide resource isolation, enabling a single-tenant experience in a multi-tenant environment — “the best of both worlds.” In this case, while applications are multi-tenant in that they share common resource pools, each is allocated its own resources. This eliminates both the chance of a performance problem caused by a “noisy neighbor” and security concerns over mixing application data on common drives.
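The per-application chargeback described above amounts to rolling raw usage records up by department and application. The following sketch shows that aggregation under illustrative assumptions: the record layout, the `bill_by_app` helper, and the 3¢-per-GB-month rate are all invented for the example, not taken from any real billing system.

```python
# Hedged sketch of per-application chargeback in multi-tenant storage.
# Record layout and rate are illustrative assumptions.

from collections import defaultdict

RATE_CENTS_PER_GB_MONTH = 3  # hypothetical rate, in cents, to keep math exact

usage_records = [
    # (department, application, GB-months consumed)
    ("finance", "year-end-close", 400),
    ("engineering", "build-artifacts", 1200),
    ("finance", "reporting", 150),
]

def bill_by_app(records):
    """Roll usage up to (department, application) so each group is
    billed for its own consumption, not the shared pool's total."""
    bills = defaultdict(int)
    for dept, app, gb_months in records:
        bills[(dept, app)] += gb_months * RATE_CENTS_PER_GB_MONTH
    return dict(bills)

invoices = bill_by_app(usage_records)
# finance/year-end-close: 400 GB-months * 3 cents = 1200 cents ($12.00)
```

Because every record already carries its tenant labels, the same data supports department-level rollups or showback reports without any change to the underlying storage configuration.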
The industry is at the front end of an exciting transformation to software-defined data centers, in which IT teams are freed from being hardware-bound and benefit from business-model and feature improvements that have raised the bar on price, performance, and flexibility.
Transformations in server and network virtualization have proven the path now accelerating in data storage. Organizations that move now can capture a substantial early-mover advantage.