
Hyper-converged infrastructure forcing new thinking for networks

Hyper-converged infrastructure not only drives new thinking about networks, it fundamentally changes some trusted networking techniques.

Once upon a time, I rebuilt a grunting Chevy big block -- cam lifted, ports squashed and overbored with four-bolt mains, of course. And back then, getting enough juice to the spark plugs to completely ignite a 10.5:1 fuel mix in liter-sized cylinders was challenging with traditional points and coils. But I was lucky, and my 454 was one of the first with Delco's high-energy ignition. It produced plenty of spark.

Standing at the whiteboard the other day, I found myself thinking back to those days, fingers crossed in the garage. I was planning network updates to support a few hyper-converged modules, and I wondered if I'd be similarly lucky -- would my existing network have enough juice? That led me to ask: Does hyper-converged infrastructure require a rip and replace, or can existing networks adapt?

Agnostic, dynamic workload hosting

Hyper-convergence, though hyper-hyped, is simple enough in premise: compute, network and storage in a box, conveniently preconfigured by the vendor and (mostly) ready to go. It also promises Lego-like modularity, with adaptable and relatively generic building blocks that reliably bolt together to provide new capacity with minimal fuss. What's missing from the conversation about compute, network and storage, however, is the interconnect network that ties the modules together.


Vendors are, of course, hardware-focused for hyper-converged infrastructure (HCI), with some including reference interconnect architectures and often white-box switching in each module. Based on the lack of conversation, one might think adapting to HCI is simply super-sizing top-of-rack or end-of-row backhauls. And while that's partially true -- and you should already be thinking beyond 10 GbE regardless -- there's more to it than that. Hyper-converged infrastructure fundamentally changes many of our trusted network techniques. It's not just a case of "make all the things faster, problem solved."

More than a Cisco UCS backplane

One great selling point of converged infrastructure (CI) gear like Cisco UCS is that bolting compute onto the backplane makes it more versatile. What was once a rack of specialized gear and lots of copper collapses into a configurable and largely invisible utility. We still think about the logical configuration of this intra-module resource, but we're happy to spend less time fiddling with physical infrastructure.

However, while the goal of CI is versatility and simplified administration, the goal of HCI is more than convergence 2.0. HCI seeks to deliver modular, homogeneous resources for compute, network and storage for a simple reason: Fewer parts and less setup allow managers to cut IT staff. The goal of extreme homogeneity is to drastically reduce labor costs, so it's not safe to assume HCI provides ever more flexibility.

We certainly see this in the cloud, where preconfigured shipping containers by the hundreds allow Amazon and Microsoft to manage tens of thousands of homogeneous servers with IT staff of just a few dozen in each data center. It works for them because they custom-designed white-box networking and have the huge development resources needed to create any management framework required. That is not the case for enterprises, where HCI must be managed and monitored using the tools provided by the vendor.

Security and management downsides

If you've ever skimmed the PCI specification -- and yes, begrudgingly, I have -- it requires network segmentation. But with HCI, how do you do that, exactly? We can't assume the certification of an existing credit card processing system will still apply after migrating the workload onto HCI. How will you segment in a way auditors can understand? Where does all your control software go when all nodes are homogeneous? Where does all your monitoring and management software go? It all goes everywhere.
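One way to keep segmentation legible when workloads can land on any node is to treat it as a policy artifact rather than a physical boundary. The sketch below is purely illustrative -- the workload names, segment IDs and scope labels are all hypothetical -- but it shows the kind of machine-checkable mapping an auditor can actually follow:

```python
# A minimal sketch of auditable segmentation policy for HCI.
# Workload names, segment IDs and scope labels are hypothetical; the point is
# that segmentation becomes a reviewable artifact, not a cable you point to.

# Map each workload to an overlay segment (e.g., a VXLAN VNI) and a compliance scope.
workloads = {
    "card-processing-api": {"segment": 5001, "scope": "PCI"},
    "card-database":       {"segment": 5001, "scope": "PCI"},
    "marketing-web":       {"segment": 6002, "scope": "non-PCI"},
    "log-collector":       {"segment": 6003, "scope": "non-PCI"},
}

def segments_by_scope(workloads):
    """Group segment IDs by compliance scope so overlaps are easy to spot."""
    scopes = {}
    for name, attrs in workloads.items():
        scopes.setdefault(attrs["scope"], set()).add(attrs["segment"])
    return scopes

scopes = segments_by_scope(workloads)
shared = scopes.get("PCI", set()) & scopes.get("non-PCI", set())
assert not shared, f"PCI and non-PCI workloads share segments: {shared}"
print(scopes)  # {'PCI': {5001}, 'non-PCI': {6002, 6003}}
```

Whether that policy is expressed in a vendor's fabric manager or in your own tooling, the check itself -- no segment shared across compliance scopes -- is what the auditor needs to see.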

Management will demand an integrated approach that approximates the traditional division of network, application and storage monitoring, at least from a security perspective. Network security with software-defined networking alone is fraught with chills and excitement. The segmented enterprise network of today isn't suitable for a rip and replace of rack systems with HCI; it's going to require some rethinking. While the CIO may appreciate the benefits of flexible workload assignment and reduced IT staff costs, legal won't necessarily be eager to explain converged control and data planes, software-defined fabric and virtual segmentation during class-action discovery.

Vendors can be coy on complexity

There is also a counterintuitive effect on interconnection complexity with hyper-converged infrastructure. Software-defined fabric provides the required flexibility and automation to manage east-west traffic between modules, but increasing network infrastructure complexity runs counter to hyper-convergence's goal to decrease the complexity of everything else. At least in the short term, some specialized enterprise network management functions missing in HCI modules will need to be recreated at the core or aggregation.

And there's reason for enterprises to be wary -- HCI is a great atmosphere for a storm of east-west traffic. Not too long ago, the primary east-west bottleneck was storage, but app admins found a reprieve with solid-state drives, shifting the bottleneck back to the network. Most HCI vendors have solved east-west within their modules, but that does not mean interconnection scales the same way.
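To make that concern concrete, here is a back-of-the-envelope sketch -- with entirely hypothetical module counts, per-node demand and uplink speeds -- of how quickly inter-module east-west demand can outgrow a rack's backhaul:

```python
# Back-of-the-envelope oversubscription check for an HCI interconnect.
# All figures are hypothetical assumptions; substitute your own module counts,
# NIC speeds and uplink capacities.

modules = 8                # HCI modules in a rack
node_eastwest_gbps = 20    # assumed east-west demand per module (e.g., 2 x 10 GbE)
uplinks = 4                # uplinks from the top-of-rack switch to aggregation
uplink_gbps = 40           # assumed 40 GbE uplinks

demand = modules * node_eastwest_gbps   # worst case: all east-west traffic leaves the rack
capacity = uplinks * uplink_gbps        # available backhaul capacity

print(f"Demand: {demand} Gbps, backhaul: {capacity} Gbps, "
      f"oversubscription {demand / capacity:.1f}:1")
# 160 Gbps vs. 160 Gbps -> 1.0:1 here, but doubling the module count without
# touching the uplinks pushes this to 2:1.
```

The arithmetic is trivial; the point is that adding modules is a purchase order, while adding backhaul is a network redesign.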

Small and medium-sized businesses (SMBs) may actually adopt HCI earlier than large enterprises because their requirements are not as complex and they typically aren't as resistant to the occasional re-engineering if the solution provides real benefit. We see this today in the ongoing adoption of UCS among SMBs. For businesses with complex regulatory requirements, unique application stacks or a more evolutionary approach to infrastructure refresh, HCI adoption may not be as straightforward.

Vendor consolidation drives hyper-convergence

Ultimately, the future of hyper-converged infrastructure will be determined for the most part by very large stack vendors. They may not originate the technology, but they will be the ones delivering hardware to our loading docks. Will Cisco go on an acquisition spree and snap up SimpliVity, Nutanix, Maxta or others to emerge as a one-stop-shop hyper-convergence vendor? Will Dell's acquisition of EMC lead to new products supporting massive-scale intra- and inter-module traffic and network services, or will the combined company continue to rely on partners? Time will tell, and I'm ready to see movement past the hyper hype cycle and get HCI to the plateau of productivity.


