The point of hyper-converged infrastructure is simplification. HCI is intended to make IT pros’ lives easier by providing enterprise workloads with pools of compute and storage resources, ideally in a true private cloud.
Use of HCI is still climbing in the enterprise, moving beyond limited use cases — such as dedicated clusters replacing storage arrays or virtual desktop infrastructure farms — into more general-purpose computing, especially in private cloud environments. HCI is far from the dominant paradigm, however, as it represents only a small fraction of overall data center infrastructure in the average enterprise and is often relegated to branch office and nascent edge computing initiatives outside existing data centers.
True HCI collapses the compute and storage hardware, along with the networking required to connect those components, into a single system, with the associated management tools and interfaces wrapped around the hardware. The internal network can be anything from standard copper Ethernet to InfiniBand to a passive optical mesh.
IT is only responsible for basic management of the internal network via the HCI software, not for provisioning it. However, IT is responsible for provisioning the data center network that ties the HCI into the rest of the infrastructure, and it must do so with HCI’s specific needs in mind.
Software-only HCI enables a bring-your-own-hardware approach. While this removes some of HCI’s simplification benefits, it does allow reuse of existing resources rather than requiring their replacement. In this case, IT must provision the network interconnects among the components, as well as the links to the rest of the infrastructure.
The key concerns for the interconnecting links in a software-only HCI are capacity and latency. IT wants enough capacity, most likely multiple bonded 1 Gbps links or a single 10 Gbps link, with ultralow latency among the components to maximize the performance of the HCI as a unit.
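As one illustration of the bonded-link approach, a Linux host running software-only HCI could aggregate two 1 Gbps interfaces with LACP using iproute2. This is a hypothetical sketch, not a vendor-specific procedure; the interface names and address are placeholders, and the switch ports must be configured for LACP to match.

```
# Illustrative iproute2 commands bonding two 1 Gbps NICs with LACP (802.3ad).
# Interface names (eno1, eno2) and the address are hypothetical examples.
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
ip link set eno1 down && ip link set eno1 master bond0
ip link set eno2 down && ip link set eno2 master bond0
ip link set bond0 up
ip addr add 10.0.10.11/24 dev bond0
```

The layer3+4 hash policy spreads flows across both member links by IP address and port, which helps HCI east-west traffic use the bond's full capacity rather than pinning everything to one link.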
How to connect HCI and networking
Capacity and resilience are, as always, the key concerns for connecting any type of HCI architecture to the rest of the infrastructure. As a rule, an HCI node will need higher-capacity links — and more of them — than a typical server. Whether the node is general-purpose compute or specialized to a single task, at a minimum, it is likely to require four high-capacity Ethernet links: two pairs, each pair running on a separate network interface card (NIC). Each pair will consist of a storage-focused link and a compute-focused link, and each pair must be able to meet minimum performance requirements on its own.
The exact capacity required will vary with the configuration and purposes of the HCI. At a minimum, IT should plan for 4x1G links, although 4x10G are becoming more common and 4x25G links are on the rise. Where single links at the desired capacities are not readily available, IT may need to resort to bonding multiple links into a set, which drives up the number of required switch ports and NIC sockets.
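The port-count arithmetic behind that planning can be sketched quickly. The helper below is illustrative only, not from any vendor's sizing tool: it computes how many same-speed links must be bonded to reach a target capacity, and the switch ports that implies per node under the redundant two-NIC, two-pair layout described above.

```python
import math

def links_needed(target_gbps: float, link_gbps: float) -> int:
    """Number of same-speed links to bond to reach the target capacity."""
    return math.ceil(target_gbps / link_gbps)

def ports_per_node(target_gbps: float, link_gbps: float,
                   pairs: int = 2, links_per_pair: int = 2) -> int:
    """Total switch ports one HCI node consumes if every logical link in
    the two-pair, two-NIC layout is a bond sized for the target capacity."""
    bonded = links_needed(target_gbps, link_gbps)
    return bonded * pairs * links_per_pair

# A 40 Gbps target met with 10 Gbps links: 4 links per bond,
# 16 switch ports per node across both redundant pairs.
print(links_needed(40, 10))    # → 4
print(ports_per_node(40, 10))  # → 16
```

The second figure is the cost the article warns about: where single links at the desired speed are unavailable, bonding multiplies the switch-port and NIC budget per node.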
For resilience, each pair of links should connect to a different leaf or edge switch, with the switches configured to enable simultaneous use of both links — via multichassis link aggregation, for example. As IT organizations adopt HCI and expand its role, the need for continuous availability will only increase, making redundant attachment to the data center fabric essential.
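The dual-homing goal can be expressed as a simple check: with either leaf switch out of service, every traffic class must still meet its minimum required capacity. The sketch below is hypothetical; the switch names, link speeds, and minimums are assumptions for illustration, not figures from the article.

```python
from typing import Dict

# Link capacities in Gbps: each traffic class is dual-homed,
# one link per leaf switch. All names and numbers are illustrative.
links: Dict[str, Dict[str, float]] = {
    "storage": {"leaf-a": 25.0, "leaf-b": 25.0},
    "compute": {"leaf-a": 25.0, "leaf-b": 25.0},
}

# Hypothetical minimum capacity each traffic class must retain, in Gbps.
minimums = {"storage": 20.0, "compute": 10.0}

def survives_switch_loss(failed_switch: str) -> bool:
    """True if every traffic class still meets its minimum capacity
    with the given leaf switch out of service."""
    for traffic_class, per_switch in links.items():
        remaining = sum(cap for sw, cap in per_switch.items()
                        if sw != failed_switch)
        if remaining < minimums[traffic_class]:
            return False
    return True

print(all(survives_switch_loss(sw) for sw in ("leaf-a", "leaf-b")))  # → True
```

The point of the check is that surviving-link capacity, not aggregate capacity, is what sizing must satisfy: each link of a pair has to carry its traffic class alone when its partner's switch fails.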
Though there is no mystery or magic to it, HCI networking does require attention to its specific role and requirements, and IT should not assume it will be as simple as plugging in a couple of cables. With the right connections, though, HCI can deliver on its promises of simplification and reduced management overhead for IT.