The desire to consolidate data center networks onto a single platform is as old as data center networks themselves.
Over the past 30 years, that consolidation has been fueled by the deployment of Ethernet and IP, which, after displacing other technologies for general networking, eventually moved into other data center (DC) network operations, including management, storage connectivity and high-performance computing interconnects.
Beyond the capital and operational efficiencies that come with having only one kind of network to run, and using the most commoditized technology to power it, getting to a single fabric promotes agility -- and agility is the IT team's current watchword. More than anything else, IT needs agility: the capacity to follow new opportunities and respond to new challenges. Buying the right data center switches is central to achieving it, because complex and special-purpose networking tend to straitjacket an infrastructure and make it less agile.
Data center switching performance is measured in two key ways: port speed and latency. The migration to Ethernet storage and the implementation of data center virtualization mean each network connection must do more -- handle more flows and handle more data volume on those flows. As a result, data center switches must support very high port speeds -- and in large data centers with lots of virtual machine hosts, form factors densely packed with high-speed ports.
Latency is a growing consideration as IT negotiates a shift from traffic flows that are predominantly north/south (server to user) to ones that are primarily east/west (server to server). East/west traffic flows place a premium on low-latency switching -- transactions that, behind the scenes, bounce around multiple servers (whether they be six, 16 or 60) before producing results. This requires network topologies that create minimal latencies to avoid application performance issues.
The drive to lower latency leads to network flattening. A few years ago the typical system was three-tiered: Edge switches fed into aggregation switches, which in turn fed into core switches. Now the data center needs to move to two-layer or even single-layer networks. For example, a mesh is a one-layer network, with every switch connected to every other; the group is managed as a single virtual switch. In a mesh, every server or storage node is at most one hop from every other node -- they are either on the same edge switch or are on switches connected directly to each other. Meshes depend on high port counts, since many switch ports are dedicated to interconnecting the switches.
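The trade-off described above -- mesh ports spent on interconnect are ports lost to servers -- can be sketched with simple arithmetic. The function and the figures below (8 switches, 48 ports each) are illustrative assumptions, not numbers from the article:

```python
# Sketch: port budget for a full-mesh fabric (illustrative numbers).
# In a full mesh of n switches, each switch dedicates n-1 ports to
# inter-switch links, leaving the remainder for servers and storage.

def mesh_port_budget(num_switches: int, ports_per_switch: int):
    interconnect_ports = num_switches - 1               # per switch
    edge_ports = ports_per_switch - interconnect_ports  # per switch, for hosts
    total_edge_ports = edge_ports * num_switches        # whole fabric
    return interconnect_ports, edge_ports, total_edge_ports

inter, edge, total = mesh_port_budget(num_switches=8, ports_per_switch=48)
print(f"Each switch: {inter} interconnect ports, {edge} host ports")
print(f"Fabric total: {total} host ports")
```

Note how quickly interconnect overhead grows: doubling the switch count to 16 would consume 15 ports per switch before a single server is attached, which is why meshes depend on high-port-count hardware.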
A leaf-spine architecture creates two layers: leaf switches, to which servers and storage connect, and spine switches, to which leaf switches connect. Every leaf node connects to each spine node. This means every node is at most two hops from every other: Either they share a leaf or each can send traffic through its leaf to a spine node and on to the other's leaf. Hybrids and variations abound: For example, a mesh can hang as a leaf off of a leaf-spine arrangement; or spine nodes might have some actual compute or storage hanging off them directly as well as interconnecting leaves (called a spline architecture).
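A common way to size a leaf-spine design is the oversubscription ratio: each leaf's total downlink (server-facing) bandwidth divided by its total uplink (spine-facing) bandwidth, where 1:1 means non-blocking. The port counts and speeds below are assumed for illustration only:

```python
# Sketch: oversubscription ratio in a leaf-spine fabric (assumed figures).
# Downlink capacity divided by uplink capacity; 1:1 is non-blocking.

def oversubscription(host_ports: int, host_speed_gbps: int,
                     uplinks: int, uplink_speed_gbps: int) -> float:
    down = host_ports * host_speed_gbps   # server-facing bandwidth
    up = uplinks * uplink_speed_gbps      # spine-facing bandwidth
    return down / up

# A leaf with 48 x 25 GbE host ports and 4 x 100 GbE uplinks (one per spine):
ratio = oversubscription(48, 25, 4, 100)
print(f"Oversubscription: {ratio}:1")
```

East/west-heavy workloads generally push designers toward lower ratios, which again argues for switches with plenty of high-speed uplink ports.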
While everyone is dialing up the speed on their switches, not everyone has embraced the 100 Gigabit Ethernet (GbE) standard, or provided it across a range of form factors. Similarly, low latency to some vendors means 5 microseconds (µs) of delay; to others it is less than 1 µs. Depending on the data center and its associated solution architectures, such distinctions can be the difference between success and failure for a new service.
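The gap between those vendor claims compounds across a transaction. The sketch below multiplies per-switch latency by the number of switch hops; the hop and server counts are assumptions for illustration, not measurements:

```python
# Sketch: why per-switch latency matters for east/west traffic.
# A transaction that bounces across many servers traverses the fabric
# repeatedly; total switch delay = switch hops x per-switch latency.

def fabric_latency_us(switch_hops: int, per_switch_us: float) -> float:
    return switch_hops * per_switch_us

# Assume a request touches 16 servers in a leaf-spine fabric, crossing
# ~3 switches (leaf-spine-leaf) each way:
hops = 16 * 3 * 2
for per_switch in (5.0, 1.0, 0.5):
    total = fabric_latency_us(hops, per_switch)
    print(f"{per_switch} us/switch -> {total} us of switch delay")
```

Under these assumptions, the difference between a 5 µs switch and a sub-microsecond switch is hundreds of microseconds per transaction -- invisible for one hop, but very visible to a chatty east/west application.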
Management and complexity
The number of tools and platforms (component operating systems) required to deploy a data center is a significant point of differentiation among vendors. The greater the number, the more complex and difficult ongoing operations will be. The ideal is a unified platform with a single management tool able to create and manage with minimal effort whichever architecture (three-tier, leaf-spine, spline, mesh) is most appropriate to the data center's needs.
Vendors differ also on whether they support the ability to perform maintenance without shutting a switch down. In the always-on data center, any feature that allows maintenance without downtime is a valuable benefit.
Key features: Mixing physical and virtual, hard and soft
A key feature of the next DC network is the integration of virtual and physical networks and, by extension, network hardware resources into software-defined networks (SDN) and private-cloud management stacks. Switches can, for example, support integration of virtual networks to VLANs by unifying resource partitioning across virtual and physical domains. By bridging virtual to physical networks, switches can play a critical role in improving virtual network performance and overall network security. The virtualization of the network is helping IT run more like a cloud, internally; therefore, switching support for integration into cloud management frameworks like OpenStack is another important goal for new data center switching.
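The VLAN-to-virtual-network bridging mentioned above is, at its core, a mapping between the 12-bit VLAN ID space and the 24-bit VXLAN VNI space that a hardware VTEP (VXLAN tunnel endpoint) maintains. This is a highly simplified sketch of that mapping, with hypothetical class and method names:

```python
# Sketch: the VLAN-to-VXLAN mapping a hardware VTEP switch performs
# (simplified; class/method names here are illustrative, not a real API).

VXLAN_VNI_MAX = 2**24 - 1   # 24-bit VNI space vs. 12-bit VLANs (4094 usable)

class VtepMapping:
    def __init__(self):
        self._vlan_to_vni = {}

    def bind(self, vlan: int, vni: int) -> None:
        """Associate a physical VLAN with a VXLAN virtual network ID."""
        if not 1 <= vlan <= 4094:
            raise ValueError("invalid VLAN ID")
        if not 0 <= vni <= VXLAN_VNI_MAX:
            raise ValueError("invalid VNI")
        self._vlan_to_vni[vlan] = vni

    def vni_for(self, vlan: int) -> int:
        return self._vlan_to_vni[vlan]

vtep = VtepMapping()
vtep.bind(vlan=100, vni=10100)   # physical VLAN 100 <-> virtual network 10100
print(vtep.vni_for(100))
```

The size mismatch is the point: VXLAN's 24-bit VNI space supports roughly 16 million segments against the ~4,000 VLANs of a purely physical network, which is why hardware VTEP support matters for cloud-scale multi-tenancy.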
Data center switches buyer's checklist
In summary, then, the key considerations for those evaluating new DC switching gear should include:
- 100 G Options
  - Range of switch form factors with 100 G ports
  - Maximum number of 100 G ports per rack unit
- Sub-Microsecond Options
  - Range of switch form factors with sub-microsecond latency
  - Maximum number of sub-microsecond ports per rack unit
- Cloud, SDN and Virtualization Readiness
  - Support for OpenFlow (the higher the version, the better)
  - Certification of compliance with the highest version for which a testing suite is defined (typically several versions behind the current)
  - Vendor OpenFlow (OF) controller, and/or third-party or open source OF controllers certified with its hardware
  - OpenStack plugin
  - Physical switch support for VXLAN and/or NVGRE
- Platform Unity, Manageability, Maintainability
  - Single operating system for all data center switches
  - Single management tool addressing the whole DC switching portfolio and all its features
  - Support for in-service software upgrades, so switches don't have to be shut down for upgrades
SDN is emerging as the next great transition in the network, the culmination of the drive to separate network architecture from cabling. SDN allows every network connection to provide any network service, because services will exist separately from the switches that provide data-plane access to the network. To properly prepare for the migration to SDN, enterprises now evaluating which switches to buy should ensure their devices support the OpenFlow (OF) standard for switch-to-controller communications.
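OpenFlow's core abstraction is a prioritized flow table: the switch matches packet header fields against installed rules and applies the matching rule's actions, punting unmatched traffic to the controller. The sketch below models that behavior in miniature; the field names and action strings are illustrative, not the spec's wire format:

```python
# Sketch: the match/action model behind OpenFlow switch programming
# (highly simplified; field names and actions are illustrative only).

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    priority: int
    match: dict      # e.g. {"in_port": 1, "eth_type": 0x0800}
    actions: list    # e.g. ["output:2"]

@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def add(self, rule: FlowRule) -> None:
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)  # highest priority first

    def lookup(self, packet: dict) -> list:
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]   # table miss: punt to the controller

table = FlowTable()
table.add(FlowRule(priority=10, match={"in_port": 1}, actions=["output:2"]))
print(table.lookup({"in_port": 1}))   # matches the installed rule
print(table.lookup({"in_port": 3}))   # table miss -> controller
```

This separation -- forwarding decided by rules pushed from a central controller rather than logic baked into each switch -- is exactly why OpenFlow support on the switch is the hedge the checklist above calls for.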
The bottom line
In a very real sense, data center network architects and engineers are buying the future when they buy their next DC switches -- at least their data center's future. The decisions they make in defining requirements and assessing products will determine which options remain open to them later without substantial new investment. So they should make these choices with attention not just to the crises of the present moment and the easily anticipated projects of the next year, but also with an eye to the two- to five-year future, which is of course much hazier. The wise ones will hedge their bets by investing in equipment that allows higher densities at higher speeds and with lower latencies, but that also allows easier management through platform simplification. They'll also be sure to keep in mind the unification of physical, virtual, traditional and SDN.
About the author:
John Burke is a principal research analyst with Nemertes Research, where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John's expertise is in virtual networks and software-defined networking (SDN) technologies, standards and implementations.