Perhaps one of the more notable evolutionary developments in technology is converged infrastructure. Now, truly, it’s not really new. Before the days of massively virtualized data centers, everything ran on a converged infrastructure: local disks, shared memory, shared processors, and inter-process communications.

So what’s different now? Well, yeah, okay, everything is different now! But let’s take a look at how that is, why infrastructures became siloed, and what convergence brings to the table.

In the Beginning

In the beginning there were servers, and each server had processors, memory, and disk storage, and only that server used those resources. In an ideal world, only one service or application ran on a server, so it was pretty straightforward to allocate and manage those resources. Some less fortunate organizations had more than one service/app running on a server, so they sometimes had to deal with the challenges of shared resources and resource contention.

Virtualization

Then came the greatness of virtualization, and suddenly everybody had to deal with shared resources. The biggest problem with shared resources, though, is that some resources take a lot of physical space, whereas others take significantly less. Over the past ten years, we’ve been able to cram exponentially more CPU cycles, RAM, and disk capacity into a single server, but one of the most notable challenges has been getting sufficient disk performance out of a single chassis.

So, along came the storage area network (SAN), with dozens of disk spindles. This made disk performance less of a bottleneck, moving most servers from shared 320 MB/sec Small Computer System Interface (SCSI) to shared multi-gigabit Fibre Channel. But SANs present another major challenge: They are expensive. Generally speaking, it costs more to connect a host to a SAN than it does to fully populate that host with the best available direct attached storage (DAS), so only the neediest of systems get access to the SAN.

Networking has similar challenges. The more virtual machines (VMs) you put on a host, the more network bandwidth is required to get those individual VMs connected to the switch at the top of the rack. Adding more network cards is a common solution, but a chassis has finite capacity: the number of card slots is fixed.
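To put rough numbers on that, here’s a back-of-the-envelope sketch. Every figure in it (VM count, per-VM demand, port speed) is a hypothetical assumption for illustration, not a measurement:

```python
import math

# Hypothetical numbers: how many 10 GbE ports does one host need
# as VM density grows?
vms_per_host = 40          # assumed VM count on the host
mbit_per_vm = 500          # assumed average sustained demand per VM
port_gbit = 10             # capacity of one 10 GbE port

total_gbit = vms_per_host * mbit_per_vm / 1000    # aggregate demand in Gbit/s
ports_needed = math.ceil(total_gbit / port_gbit)  # whole ports required

print(f"{total_gbit:.0f} Gbit/s of VM traffic -> {ports_needed} x {port_gbit} GbE ports")
```

Double the VM count or the per-VM demand and you need another card, which is exactly where the fixed slot count starts to bite.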

Convergence

The software-defined data center (SDDC) promises to change all of this. Much of the networking traffic will move inside the hypervisor, reducing the traffic that moves across physical connections. SDDC allows us to share internal spindles across multiple nodes of a file services cluster, effectively creating a pseudo-SAN using DAS, but at significantly lower cost and with potentially better throughput. In the best of Fibre Channel SANs, you’ll have dozens of spindles pushing data across a shared 16 Gbit/sec connection. Using DAS in a file services cluster, each spindle can have 6 Gbit/sec of dedicated throughput.
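A quick sketch of that comparison, using the figures above; the spindle count is my own assumption standing in for “dozens”:

```python
# Shared SAN link vs. dedicated DAS lanes, using the figures above.
spindles = 24                 # "dozens of spindles" (assumed count)
fc_link_gbit = 16             # one shared 16 Gbit/sec Fibre Channel link
sas_lane_gbit = 6             # dedicated 6 Gbit/sec lane per DAS spindle

san_share = fc_link_gbit / spindles     # worst-case share per spindle
das_total = spindles * sas_lane_gbit    # aggregate dedicated throughput

print(f"SAN: {san_share:.2f} Gbit/sec per spindle when all are busy")
print(f"DAS: {das_total} Gbit/sec aggregate across the cluster")
```

The shared link is fine until every spindle is busy at once; the dedicated lanes never contend with each other.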

Anti-Convergence

However, this capability may not be the right solution for every need. Allow me to take us back to the days of consumer stereo. One of the notable benefits of component stereo systems was the ability to purchase best of breed for each module: the best amplifier, the best turntable, and the best speakers; rarely were they all built by the same company.

But this best of breed solution also had complications. First, there was a lot of research involved in making purchasing decisions. Also, somebody had to connect everything together correctly. The good news, of course, is that everything had standardized interfaces, so there was never a question of whether ‘A’ would work with ‘B’ and ‘C’. Composite stereo systems (all-in-ones) met the need at the other end of that spectrum: nothing to connect (except the power plug), minimal research to do (most people bought what they could afford), and it was working within a few minutes of unboxing.

But here’s the biggest problem people encountered with all-in-one stereo systems: speakers blew out, amplifiers got zapped by power surges, and turntable belts broke, leaving you with a stereo system that only partially worked, if it worked at all. But more critically, to get it fixed, you had to take the entire unit to the repair shop.

Back to converged infrastructure. You may be purchasing an all-in-one system designed to provide a hundred VMs right out of the box, but the vendor you’re buying from is merely an aggregator of hardware and software sourced from other manufacturers: manufacturers they’ve chosen, possibly the ones that offered the best pricing. What happens when something breaks? What happens when it just works poorly? Who reconciles issues between the disk subsystem, the compute subsystem, and the software? Unlike with the all-in-one stereo, these questions can probably be resolved prior to purchase, but they do need to be asked.

Converged infrastructure may be a great solution for certain scenarios, but I’m intrigued by the thought that, just maybe, some IT pros are not quite ready to give up the best of breed approach to building a data center. Perhaps more significantly, it’s not just one person making the decision. You’ve got SysAdmins, NetAdmins, DBAs, Storage Admins, and Virtualization Admins, all of whom will have a voice in the matter, and a vested interest in preserving their jobs.

What about you? Would you sacrifice the best of the best for some convenience? Is it worth unboxing a single chassis to get a hundred VMs all at once, avoiding the days (or weeks) of effort it might take to build out a four-node hypervisor cluster and a hundred-spindle SAN? Do you think a committee of siloed IT pros can converge their mindsets to buy a single-vendor solution for the entire data center?

The Need to Manage Remains

Whichever approach you determine is right for you, remember that thorough management and monitoring of your infrastructure is key to long-term success. Some converged infrastructures come pre-packaged with management software as part of the convergence, but you have to ask yourself, “Is it enough?” In the case of best of breed data centers, even the best fall down sometimes. Proper management and monitoring means you’re not only aware of problems, but able to prevent them before they become catastrophes, all the while fine-tuning for peak performance.
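As a trivial illustration of that proactive mindset, even a few lines of scripting can flag a volume before it fills. The 85% threshold below is an arbitrary assumption, and a real environment would rely on proper monitoring tooling rather than a one-off script:

```python
import shutil

# Minimal sketch: warn before a volume fills, rather than after.
WARN_PCT = 85  # assumed threshold; tune for your environment

def check_volume(path: str) -> None:
    usage = shutil.disk_usage(path)            # total, used, free (bytes)
    pct_used = usage.used / usage.total * 100
    if pct_used >= WARN_PCT:
        print(f"WARNING: {path} is {pct_used:.1f}% full")
    else:
        print(f"OK: {path} at {pct_used:.1f}%")

check_volume("/")
```

The point isn’t the script; it’s that catching the trend at 85% is what turns a catastrophe into a maintenance ticket.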