Five Things To Do Now To Enable A Software-Defined Data Center Future
A growing number of organizations are evaluating and implementing software-defined data center (SDDC) management capabilities to increase agility and utilization. The technology is advancing quickly and software-defined management represents the future for most organizations. Gartner predicts that by 2020 “the programmatic capabilities of an SDDC will be considered a requirement for 75% of Global 2000 enterprises that seek to implement a DevOps approach and a hybrid cloud model.”
Like every major change in the data center, the transition to SDDC is more evolution than revolution. As such, the decisions made today will either facilitate or serve as obstacles to that evolution, regardless of whether the organization is already on the path to SDDC or just beginning to evaluate its potential.
One of the challenges of software-defined management is the complexity of today’s data center. The data center is too complex to be managed in a single-layer approach. It requires, at minimum, a four-layer framework that encompasses device communication, data collection, system management, and centralized management.
The effectiveness of this layered approach depends as much on what happens between the layers as what happens within each layer. When the right data is collected and moved purposefully up the stack, the capabilities of the layer above it are enhanced. This ultimately provides the top of the stack, typically represented by a data center infrastructure management (DCIM) suite, the visibility into devices and systems — and their interdependencies — required to support SDDC.
Here are five things you can do to simplify the evolution to SDDC:
• Develop a roadmap for transitioning to the Redfish specification. Redfish will play an important role in unlocking data currently trapped in the device communications framework and making it accessible to collection engines.
Redfish is a new specification for out-of-band server management released through the Distributed Management Task Force (DMTF) in August 2015. It was developed to address the limitations of the current Intelligent Platform Management Interface (IPMI) specification. IPMI wasn’t designed to meet the challenges of today’s data centers and as a result isn’t well suited to support industry trends in scale-out computing, data-driven management, and automation.
Redfish addresses the limitations of IPMI through a purposeful representational state transfer (REST) and JSON-based design that is open, lightweight, scalable, and easy to maintain and automate. Redfish will be familiar to developers used to working with modern web applications that typically leverage REST and JSON. The use of these formats to communicate between layers provides a familiar, open architecture that allows applications to take advantage of new capabilities, such as automation, while leveraging skill sets that already exist within the IT organization.
We are already seeing server products entering the market with Redfish support, and the availability of Redfish-enabled servers will ramp up throughout this year. Most new servers will support Redfish in some form by the Intel Purley platform launch in 2017. Equally exciting is the specification’s potential to extend beyond server management. It is a powerful and elegant approach to communication that will ultimately be supported by data center power and thermal management equipment, creating a common language for all data center systems and thus simplifying software-defined management.
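To give a flavor of why the REST/JSON design lowers the barrier for developers, the sketch below parses a simplified, hypothetical Redfish server resource of the kind returned by a GET to a `/redfish/v1/Systems/<id>` endpoint. The field values are invented for illustration; real payloads carry many more properties.

```python
import json

# Hypothetical, trimmed-down Redfish response for a single server resource.
# Real responses from /redfish/v1/Systems/<id> are far richer than this.
sample_response = """
{
    "@odata.id": "/redfish/v1/Systems/1",
    "Id": "1",
    "Name": "WebServer-01",
    "PowerState": "On",
    "ProcessorSummary": {"Count": 2},
    "MemorySummary": {"TotalSystemMemoryGiB": 96}
}
"""

system = json.loads(sample_response)

# Because Redfish is plain JSON over HTTP, extracting management data is
# ordinary dictionary access -- no binary IPMI command encoding required.
print(system["Name"], system["PowerState"])
print("Memory (GiB):", system["MemorySummary"]["TotalSystemMemoryGiB"])
```

The same skills a web developer already uses (HTTP verbs, JSON parsing) are all that is needed to read or automate against this data, which is the point of the specification’s design.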
• Support local data aggregation. The barrier to gaining the visibility and control over data center devices required for software-defined management has never been a lack of operating data. The devices in the data center create a wealth — perhaps even an overabundance — of data through the sensors and controls embedded in servers, switches, storage systems, and infrastructure devices.
The challenge has always been accessing and consolidating that data in a way that enables effective management. That challenge is addressed through the data collection layer, which provides the capability to consolidate, translate, and filter data so the management system receives only meaningful data in a form it can use.
Management gateways, which collect data from multiple types of devices (often within a rack or collection of racks), translate it, and deliver it through an API to the application framework, will become a vital link between devices and the management suites that support automation. These gateways will soon add Redfish translation support, allowing them to aggregate data from IPMI-enabled equipment, SNMP devices, and legacy building management systems (BMS) alongside data from newer Redfish-enabled systems, so operators can continue to use their legacy tools as they transition to the Redfish approach and RESTful APIs.
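The translation step a gateway performs can be sketched as follows. This is an illustrative assumption, not any vendor’s implementation: protocol-specific readings are mapped into one common, Redfish-style record so the layer above sees a single schema. The field layouts and scaling factors are invented for the example.

```python
# Sketch of a management gateway's translation step: readings arriving in
# different device protocols are normalized into one common schema.
# Protocol field layouts and scaling here are illustrative assumptions.

def normalize_reading(source: dict) -> dict:
    """Map a protocol-specific sensor reading to a common record."""
    if source["protocol"] == "ipmi":
        # IPMI-style reading: named sensor plus value and units
        return {"Name": source["sensor"], "Reading": source["value"],
                "Units": source.get("units", "Unknown")}
    if source["protocol"] == "snmp":
        # SNMP-style reading: labeled integer, scaled by 10 in this example
        return {"Name": source["oid_label"], "Reading": source["value"] / 10,
                "Units": "Celsius"}
    raise ValueError(f"unsupported protocol: {source['protocol']}")

readings = [
    {"protocol": "ipmi", "sensor": "Inlet Temp", "value": 24,
     "units": "Celsius"},
    {"protocol": "snmp", "oid_label": "rackTemp1", "value": 253},
]

normalized = [normalize_reading(r) for r in readings]
for rec in normalized:
    print(rec["Name"], rec["Reading"], rec["Units"])
```

Filtering would follow the same pattern: the gateway forwards only the normalized records the management layer actually needs, rather than every raw sensor event.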
• Enable intelligent system-level management. Intelligent system controllers complement management gateways by providing data collection for the systems they support, while also optimizing the performance of those systems.
Intelligent system-level management operates in the middle layer of the data center management stack, optimizing infrastructure performance locally while delivering critical data to the higher-level software managers in a secure way. A good example is data center thermal management, which can account for anywhere from 15% to 40% of data center energy costs, depending on the approach and equipment used.
Intelligent thermal management systems use inputs from multiple sensors across the data center to precisely control environmental conditions and maximize the role of energy-saving technologies such as economizers. This can reduce thermal energy costs by as much as 30% while adding the level of visibility and control necessary to ensure thermal systems respond fluidly to automated load shifts initiated by the SDDC manager.
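The decision logic of such a system can be illustrated with a deliberately simplified sketch. This is not a vendor algorithm; the setpoint, margin, and mode names are assumed values chosen only to show how distributed sensor inputs feed an economizer-versus-mechanical-cooling decision.

```python
# Illustrative sketch (not a vendor algorithm): a thermal controller
# averages distributed sensor readings and decides whether free cooling
# (economizer mode) can satisfy the load. All thresholds are assumptions.

def choose_cooling_mode(sensor_temps_c, outside_temp_c,
                        setpoint_c=24.0, economizer_margin_c=5.0):
    """Return (mode, average inlet temperature in Celsius)."""
    avg = sum(sensor_temps_c) / len(sensor_temps_c)
    if outside_temp_c <= setpoint_c - economizer_margin_c:
        return "economizer", avg   # outside air is cool enough: free cooling
    if avg > setpoint_c:
        return "mechanical", avg   # compressors needed to hold the setpoint
    return "idle", avg             # within setpoint, no action required

mode, avg = choose_cooling_mode([23.5, 24.2, 22.9], outside_temp_c=12.0)
print(mode, round(avg, 2))  # → economizer 23.53
```

The same inputs that drive this local decision are what the controller reports upward, giving the SDDC manager confidence that an automated load shift will not outrun the cooling system.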
• Adopt scalable data center infrastructure management (DCIM). The ability to automatically shift loads based on changing conditions — the heart of the SDDC value proposition — creates opportunities for data center managers to increase asset utilization, minimize downtime, and reduce capital expenditures. However, it is important to remember that when loads are transferred, the systems they are being shifted to must be supported by an infrastructure with the capacity to handle them. Just because servers have capacity doesn’t necessarily mean the power and cooling systems supporting them do. Unless the interdependence of this infrastructure with the servers, switches, and storage it supports is recognized, automation could have unintended consequences.
DCIM has emerged as a solution for organizations seeking to gain enterprise-wide visibility into device location, utilization, and performance. This data directly impacts decisions about what equipment can accept new loads without increasing risk, further enhancing automated decision making. As DCIM systems continue to evolve they will expand the data available to the SDDC to include factors such as current energy costs and external environmental conditions, which can also influence decisions on where and when to shift loads.
This is only possible when the DCIM platform is open to the management framework below. Some DCIM systems have added open APIs to simplify this integration, providing the real-time visibility into resource utilization, available capacity, and costs required for informed decision making by SDDC software.
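The capacity check described above can be sketched as follows. The inventory data is mocked as an in-memory dictionary standing in for a DCIM API response, and all rack names and kilowatt figures are invented; the point is that both power and cooling headroom must clear before a load shift is safe, not server capacity alone.

```python
# Hedged sketch: before an SDDC manager shifts a load, it consults DCIM
# (mocked here as a dict) to confirm the target rack's power AND cooling
# headroom. All rack names and figures below are invented for illustration.

RACK_INVENTORY = {
    "rack-A1": {"power_kw": 12.0, "power_used_kw": 7.5,
                "cooling_kw": 14.0, "cooling_used_kw": 8.0},
    "rack-B2": {"power_kw": 12.0, "power_used_kw": 11.2,
                "cooling_kw": 14.0, "cooling_used_kw": 13.1},
}

def can_accept_load(rack_id: str, load_kw: float) -> bool:
    """True only if both power and cooling headroom cover the new load."""
    rack = RACK_INVENTORY[rack_id]
    power_free = rack["power_kw"] - rack["power_used_kw"]
    cooling_free = rack["cooling_kw"] - rack["cooling_used_kw"]
    return load_kw <= power_free and load_kw <= cooling_free

print(can_accept_load("rack-A1", 3.0))  # headroom on both -> True
print(can_accept_load("rack-B2", 3.0))  # power-constrained -> False
```

A richer version of this check is exactly what an SDDC manager gains from an open DCIM API: the second rack has idle servers, but automation that ignored its power headroom would create the unintended consequences described above.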
• Identify pilot programs for automating decision making. In addition to the technical barriers that must be overcome to enable effective software-defined management, organizations must begin to build the skills and test the processes required for automation. Before implementing SDDC across a facility or enterprise, it may prove wise to create a pilot project using a non-critical application or service and only one plane of the SDDC environment, such as storage. This allows IT staff to gain hands-on experience with automation with minimal risk to the organization.
As the team gets comfortable with the technology and refines the processes to support automation, the pilot can be expanded to include additional planes and higher-profile services until the organization is confident enough to implement SDDC across a facility or enterprise.
The industry is moving closer to wide-scale adoption of SDDC capabilities. SDDC offers significant benefits in both cost and agility, and IT organizations prepared to adopt it will increase their value to the business. Preparing for Redfish, implementing local data collection and system management, adopting open management systems, and experimenting with SDDC on non-critical services will all help you capitalize on this exciting technology when the time is right.