Every CIO wants to achieve the software-defined data center (SDDC). Organizations across the globe are working to make their compute, storage, and networking infrastructure more software-defined, with the ultimate goals of increasing agility, flexibility, and efficiency. As the concept of “software-defined” becomes increasingly prevalent, the need to simplify the journey, while still reaching SDDC nirvana, is critical.

Most organizations recognize the potential operational benefits of adopting software-defined technologies in their IT infrastructure; however, many are missing a key ingredient. The SDDC is not achieved by simply bolting together virtualization, software-defined networking (SDN), and software-defined storage (SDS). While these components are important, a true SDDC is not something that can be bought off the shelf; rather, it is an operational state achieved by adopting a new way of managing and controlling all the moving parts within the infrastructure.

Adding to these misconceptions, many see achieving the SDDC as a daunting and expensive task that requires virtualizing every resource within the data center. According to a study by EMA, the most pressing IT challenges associated with achieving an SDDC are centralized management, repeatable configuration of software and hardware infrastructure, policy-driven provisioning and application placement, and automation and orchestration of application deployments. In reality, achieving the SDDC is easier than one might think once the tactics below are followed.

VIEW INFRASTRUCTURE IN TERMS OF CAPABILITIES

In previous generations of hosting infrastructure, the servers, storage, and networks were viewed as the “independent variable,” and many hosting decisions were a function of what hardware was “on the floor.” In software-defined infrastructure this is no longer the case: the capabilities of the infrastructure can instead be a function of what the workloads demand. This is a radical change in thinking that must underpin any move to software-defined infrastructure.

To make this shift, infrastructure must be thought of as providing a certain set of capabilities, such as performance characteristics, availability, compliance, and licensed software. Where a workload is hosted depends on these capabilities, as well as on the amount of spare resources available. In software-defined infrastructure, a given environment can offer a range of capabilities, and determining what resources are available becomes much more complex as traditional physical and virtual boundaries come down. A hosting environment can often be viewed as one big pool of capacity, even if it spans physical boundaries, and the services it offers to the applications it hosts depend entirely on what those applications need.
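As a rough illustration of this capability-oriented view, the sketch below models pools and workloads as simple data structures and filters pools on fit for purpose and spare capacity. The class names, fields, and figures are assumptions made for the example, not part of any particular product or platform:

from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    capabilities: set = field(default_factory=set)  # e.g. {"ssd", "pci-dss", "oracle-licensed"}
    cpu_free: int = 0                                # spare vCPUs
    mem_free_gb: int = 0                             # spare memory

@dataclass
class Workload:
    name: str
    required: set = field(default_factory=set)       # capabilities the workload must have
    cpu: int = 0
    mem_gb: int = 0

def candidate_pools(workload, pools):
    """Return pools that are fit for purpose and have enough spare capacity."""
    return [p for p in pools
            if workload.required <= p.capabilities
            and p.cpu_free >= workload.cpu
            and p.mem_free_gb >= workload.mem_gb]

pools = [Pool("gold-cluster", {"ssd", "pci-dss"}, cpu_free=64, mem_free_gb=512),
         Pool("bulk-cluster", {"hdd"}, cpu_free=256, mem_free_gb=2048)]
app = Workload("payments-db", required={"ssd", "pci-dss"}, cpu=16, mem_gb=128)
print([p.name for p in candidate_pools(app, pools)])  # -> ['gold-cluster']

The point is not the code itself but the inversion it represents: the workload states what it needs, and the infrastructure is selected (or shaped) to match, rather than the other way around.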

IDENTIFY BEST-IN-CLASS ANALYTICS

With a more modern view of infrastructure, establishing policy-based management is the next step for any software-defined initiative. Because there are more moving parts and more “permutations and combinations” in the ways applications and infrastructure can interact, organizations cannot rely on human judgment alone when analytics can guarantee the compliance, security, and cost efficiency required within the data center.

By understanding the purpose of a workload and establishing policies on how its needs should be met, organizations can accurately determine:

• What the infrastructure should look like (i.e., fit for purpose)

• How much infrastructure is required (now and into the future)

• Where workloads should go, and how resources should be allocated to them

IT teams building and operating private clouds need to look for a control plane that can answer such questions using intelligent analytics, rather than spreadsheets and best guesses. Without the precision and speed that come from this approach, it is not possible to achieve automation or to keep supply and demand in balance within a private cloud.
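To make the idea concrete, the following sketch shows one way a policy-driven placement decision might be expressed: the policy captures how a workload’s needs should be met, and a scoring function ranks fit-for-purpose pools by cost and remaining headroom. Pools are plain dictionaries here so the sketch stands on its own, and all fields, weights, and figures are invented for illustration rather than drawn from any specific control plane:

# Illustrative only: the policy states how the workload's needs should be met, and
# the scoring function ranks fit-for-purpose pools instead of relying on guesswork.
pools = [
    {"name": "gold", "caps": {"ssd", "pci-dss"}, "cpu_total": 128, "cpu_free": 64,  "cost": 3.0},
    {"name": "bulk", "caps": {"hdd"},            "cpu_total": 512, "cpu_free": 256, "cost": 1.0},
    {"name": "edge", "caps": {"ssd", "pci-dss"}, "cpu_total": 32,  "cpu_free": 10,  "cost": 4.0},
]
workload = {"name": "payments-db", "required": {"ssd", "pci-dss"}, "cpu": 16}
policy = {"min_headroom": 0.20}   # keep at least 20% of each pool free after placement

def score(pool, workload, policy):
    """Return a sortable score (lower is better), or None if the policy is violated."""
    if not workload["required"] <= pool["caps"]:
        return None                                           # not fit for purpose
    headroom = (pool["cpu_free"] - workload["cpu"]) / pool["cpu_total"]
    if headroom < policy["min_headroom"]:
        return None                                           # would breach the headroom policy
    return (pool["cost"], -headroom)                          # cheapest first, headroom breaks ties

valid = [(score(p, workload, policy), p) for p in pools]
valid = [(s, p) for s, p in valid if s is not None]
best = min(valid, key=lambda sp: sp[0])[1]["name"] if valid else None
print(best or "no compliant capacity: buy or reconfigure")    # -> gold

A real control plane evaluates far richer data than this, but the shape of the decision is the same: encode the policy once, then let analytics apply it consistently to every placement.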

UNDERSTAND DEMAND

IT must have insight into the needs of existing applications, as well as those due to enter or leave the infrastructure, in order to plan ahead and make better use of infrastructure resources.

Picture a large conference facility typically used for tradeshows and conferences, where moveable walls and flexible seating provide almost infinitely configurable layouts. If organizers do not know what kinds of events will be taking place (an industry trade show, a wedding, etc.), or how many people will attend each one, it is impossible to know how big individual rooms should be, what kinds of equipment will be required, or what services, such as electrical and catering, will be needed. To run a profitable operation and keep customers happy, upcoming demand must be understood and managed to the highest degree possible, and when demands are not easily understood it becomes even more critical to prepare for the unknown. Agility should not come from panic-driven shuffling of customers, but from proactively considering all potential scenarios and providing “whitespace” to absorb any last-minute activity.

Because IT organizations have the same efficiency and customer satisfaction goals, they too require a proper model of upcoming demand, both confirmed and likely, so they can determine the most efficient way to provide infrastructure that satisfies those requirements. Without an understanding of demand, organizations may end up with very powerful, expensive infrastructure that is not fully leveraged because it cannot be aligned with the demands of the applications. As a result, capacity will be stranded and customers will not get the service they need, which is a lose-lose scenario.

By aligning supply and demand, the SDDC becomes greater than the sum of its software-defined parts. Organizations can use software-defined controls to specify, configure, and match infrastructure supply to current and anticipated future requirements. While knowing your demand today is great, what will really give you a competitive edge is the ability to predict your demand for tomorrow. The alternative is rampant over-provisioning, which drives up the unit cost of hosting workloads, something that is no longer acceptable in the cloud era.
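As a simple illustration of what such a demand model might look like, the sketch below projects confirmed and likely (pipeline) demand per quarter against planned supply, surfacing where whitespace remains and where a shortfall is coming. The figures, quarter labels, and the 60% weighting on pipeline demand are assumptions made purely for the example:

# Rough sketch of a demand model: confirmed and likely (pipeline) demand is projected
# per quarter and compared with planned supply to find whitespace or an upcoming shortfall.
confirmed  = {"Q1": 420, "Q2": 460, "Q3": 480}   # vCPUs already committed
pipeline   = {"Q1": 40,  "Q2": 120, "Q3": 200}   # likely, but not yet confirmed
supply     = {"Q1": 512, "Q2": 512, "Q3": 640}   # planned capacity
likelihood = 0.6                                 # weight applied to pipeline demand

for quarter in sorted(supply):
    demand     = confirmed[quarter] + likelihood * pipeline[quarter]
    whitespace = supply[quarter] - demand
    status = "OK" if whitespace >= 0 else "SHORTFALL: add capacity or defer demand"
    print(f"{quarter}: demand={demand:.0f} vCPUs, supply={supply[quarter]}, "
          f"whitespace={whitespace:.0f} -> {status}")

Even a crude projection like this turns a surprise shortfall into a planned capacity decision made quarters in advance.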

DRIVE AUTOMATION

Determining what resources are required is becoming increasingly complex, and there is a lack of intelligence guiding most automation today. Traditional capacity management tooling is inadequate in a world where the infrastructure is programmable and application demand is stacked on shared infrastructure. Most organizations succeed in automating isolated parts of the operational process, such as provisioning a new VM, but still rely on experts with spreadsheets for higher-level decisions, such as selecting the hosting environment a workload should go into.

Accurate, detailed models of workload demands, fine-grained control over supply, and policies are required to bring these isolated parts together in a way that makes sense. The move toward software-defined is invariably coupled to the move to higher levels of automation, and a new type of policy-based control system is required to get there.
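Pulling the earlier sketches together, a policy-based control loop might look something like the skeleton below: each placement request is evaluated against capabilities, the headroom policy, and the current supply model before anything is provisioned, and requests that cannot be satisfied are surfaced as capacity decisions rather than guessed at. The provision() and notify_planner() functions are hypothetical stand-ins for whatever orchestration and planning hooks an environment actually uses, and score() refers to the policy scoring sketch above:

# Skeleton of a policy-driven placement loop; not a reference implementation.
def provision(workload, pool):
    # Stand-in: call the real orchestration API here.
    print(f"provisioning {workload['name']} on {pool['name']}")

def notify_planner(message):
    # Stand-in: open a capacity-planning ticket instead of silently failing.
    print("PLAN:", message)

def handle_request(workload, pools, policy):
    """Place a workload automatically, or escalate when no pool satisfies the policy."""
    valid = [(score(p, workload, policy), p) for p in pools]   # score() from the earlier sketch
    valid = [(s, p) for s, p in valid if s is not None]
    if not valid:
        notify_planner(f"no compliant capacity for {workload['name']}")
        return None
    best = min(valid, key=lambda sp: sp[0])[1]
    provision(workload, best)
    best["cpu_free"] -= workload["cpu"]   # keep the supply model current for the next decision
    return best["name"]

The important design point is that the loop never provisions without first consulting the policy and the demand-aware supply model, which is what keeps automation from simply accelerating bad decisions.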

While software-defined infrastructure brings a new level of complexity, it is controllable through sophisticated analytics and purpose-built control software. The upside is that proper analytics not only tames this complexity but also brings new levels of efficiency that were not possible in previous generations of infrastructure. By precisely aligning hardware and software resources with the workloads they are serving, infrastructure can rapidly adapt to meet changing application needs, and at the same time efficiency and service levels increase. This win-win scenario is the true goal of the SDDC.