Hyperconverged Infrastructure Versus Cloud Strategies
HCI environments are more than the myths make them out to be
IT infrastructures are changing at a rapid pace. As technology professionals contend with the challenges inherent to hybrid IT environments, they may also face the decision of whether to adopt a hyperconverged infrastructure (HCI) or the cloud as the enabler of a modernized environment. HCI, an alternative to public or private cloud deployments, combines compute, storage, and networking in a single system and can come from a single vendor or be hardware agnostic.
A couple of key developments have made HCI appealing for more workloads. One is the ability to scale compute and storage capacity independently via a disaggregated model. The other is the ability to build a hyperconverged solution on NVMe over Fabrics, which extends NVMe, an open logical device interface specification for accessing nonvolatile storage media attached via a PCI Express bus, across a network fabric.
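To make the disaggregated model concrete, here is a minimal sketch of how a Linux host attaches remote NVMe-oF storage using the standard nvme-cli tool. The transport, IP address, port, and NQN below are hypothetical placeholders, not values from any specific product, and the commands assume root privileges on a kernel with NVMe/TCP support:

```shell
# Load the NVMe/TCP transport module (assumes a modern Linux kernel)
modprobe nvme-tcp

# Discover subsystems exported by a hypothetical NVMe-oF target
# (192.0.2.10 and port 4420 are placeholder values)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN; its namespaces then
# appear as local block devices (e.g., /dev/nvme1n1)
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.org.example:storage1
```

Once connected, the remote namespace behaves like an ordinary local NVMe block device, which is what lets a node in a disaggregated HCI design consume storage capacity that physically lives elsewhere.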
When IT departments look to switch to a virtualized environment, the conversation almost always turns to the cloud. However, both HCI and cloud have their respective benefits and drawbacks. HCI, for example, lets businesses run existing applications in a VM or containerized environment that stays on-premises, without requiring tech pros to get caught up in the specifics of cloud providers (and that includes on-prem hybrid offerings like Azure Stack and AWS Outposts). HCI also gives tech pros a way to manage the parts of the infrastructure that are not cloud appropriate. On the other hand, one of the primary downsides of HCI is that it adds another platform layer for teams to manage, on top of on-prem elements like networking and storage and on top of any cloud applications, which is where most executives are pushing their IT teams to migrate data.
Meanwhile, the cloud is elastic: you provision only the storage and compute you immediately need, but it can be much more expensive than other options. It's also worth noting that developer teams working in cloud environments are often unaware of other environments, and security measures are not built in; they are an additional expense for the business.
All that said, organizations weighing their options between HCI and cloud may also be faced with misleading myths about HCI and where it might fit into their current IT strategy.
We’ve taken the liberty of breaking down some of these myths to give tech pros a better idea of what it means to work in an HCI environment and to clear up some of the fear, uncertainty, and doubt that comes along with the decision-making process.
- HCI costs more than building your own virtual infrastructure
This harks back to the notion that building your own infrastructure is free, provided your time is worth nothing. Even the most talented tech pros do not inherently know how to build their own HCI, so they will have to spend time learning and will likely get it wrong a few times, at a cost to the business. So, even if the components and licensing are cheaper upfront, building your own infrastructure could end up significantly more expensive. Whether you are taking additional time to build it out or hiring an experienced (read: expensive) professional to build it for you, rolling your own virtual infrastructure can deplete valuable resources. And when all is said and done, maintaining that infrastructure can be just as expensive.
- HCI is just software-defined storage with a hypervisor
Software-defined storage with a hypervisor is virtually the same thing as a storage area network (SAN). But what about networking? What about scalability and clustering? With the right HCI, all of this is built into the platform as an interconnected offering. A true HCI implementation handles those tasks (and more) with minimal interaction from the owner or user.
- HCI doesn’t work for the entire “enterprise to edge-computing” spectrum
Despite popular belief, HCI can be just as cost-effective as other virtualized environments across this spectrum. If you build it yourself, understand that home-grown, custom-built virtual storage appliance (VSA) solutions use significantly more resources than a purpose-built HCI solution, and those VSA solutions are simply not economical for edge computing. A true HCI can be far more resource-friendly and, therefore, makes edge computing a reality. One of the risks of self-building is that most IT teams build only for what works in the core of the environment, not for what will also work at the edge.
- HCI is a bad idea because it’s a single-vendor solution
Some people like to diversify their investment portfolios, but few data center managers consider a multi-vendor approach absolutely necessary. In fact, a core built entirely on one vendor's stack can interoperate far more efficiently than a piecemeal infrastructure assembly.
HCI is expected to grow by $24.56 billion over the next four years, demonstrating the extent to which organizations continue to adopt this architecture in tandem with the cloud, as an alternative to it, or perhaps as a stepping stone toward it. But regardless of how an organization chooses to move forward with cloud or HCI, one element must not be overlooked: monitoring.
You need full-stack visibility into these architectures to ensure performance and availability for users. When an organization owns its infrastructure, as with HCI, it is responsible for repairing anything that breaks, meaning it is on the hook for maintaining the health of the system. And although a key benefit of the cloud is handing management off to “someone else,” the principle of “trust but verify” still holds. Tech pros should look for tools that provide visibility beyond the firewall and offer insight into the state of workloads in the cloud.
This is even more critical when you consider that your environment on Day 100 is dramatically different than it was on Day 1: The work you do is going to shift. Comprehensive monitoring is the best way to track health and performance baselines and, ultimately, empower tech pros to successfully optimize their infrastructure — and that goes for both on-premises HCI and workloads that live in the public cloud.
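The idea of a health baseline can be sketched in a few lines of Python. This is an illustrative toy, not any monitoring vendor's implementation: it keeps a rolling window of a metric, computes the recent mean and standard deviation, and flags samples that deviate sharply from that baseline.

```python
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    """Toy health baseline: flag samples far from the recent rolling mean."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent metric values
        self.threshold = threshold           # std-devs that count as anomalous

    def observe(self, value):
        """Record a sample; return True if it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Example: steady CPU readings, then a spike
monitor = BaselineMonitor(window=30)
for v in [51, 49, 50, 52, 48, 50, 51, 49]:
    monitor.observe(v)
print(monitor.observe(95))  # the spike stands out against the baseline
```

Because the baseline is recomputed from the rolling window, it drifts along with the environment, which is exactly why Day-100 behavior needs the same scrutiny as Day-1 behavior.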