Despite all the hype that software-defined networking and network virtualization generate, the first concern of any network engineer is the physical switching infrastructure.
Yet software-defined networks (SDNs) and virtual overlay networks make it difficult for IT organizations to navigate the market. Many CIOs have long relied on Gartner's Magic Quadrants to guide them toward safe buying decisions -- pick a vendor designated by Gartner as a leader, and all will be well. When it comes to data center networks, however, Gartner offers no safe choice. For the second year in a row, the consultancy's Magic Quadrant for data center networking has no leader. Even Cisco, which has shipped more ports than any other vendor, can't break through. All of this points to an industry in a transition that network engineers must carefully navigate.
SDN confuses apples-to-apples comparison of switching vendors
Most vendors are rapidly pivoting their technologies to address SDN and virtual overlay networks, which can leave a lot of engineers unsure of how to proceed. To add to the confusion, Gartner added VMware -- a company that has never shipped, and probably never will ship, a single physical network port -- to the visionary field of the Magic Quadrant for its NSX overlay software.
Before SDN captured everybody's attention, vendors raced to build the best data center fabric products. Cisco and Brocade embraced Transparent Interconnection of Lots of Links (TRILL); Avaya and Alcatel-Lucent embraced Shortest Path Bridging; Hewlett-Packard embraced both; Juniper Networks built a proprietary fabric, QFabric; and Arista Networks, Dell and others decided leaf-and-spine architectures could do the trick.
These same divisions remain in the industry today, only no one is talking about them as much as they used to.
"When I talk to clients, they don't bring up fabrics very often," said Mark Fabbi, vice president at Gartner and lead author of the Magic Quadrant. "Has fabric become old news? To a certain extent, I don't disagree with that. There have been so many new technologies thrown at data center architects that they really haven't had a chance to absorb everything. Over the last five years, it was network-storage convergence and Fibre Channel over Ethernet, then they got hit with fabrics, then SDN started looming. It's kind of overwhelming if you are on the receiving end of all this stuff. We don't talk about fabrics as much because all the hype is focused on SDN, but fabric is still relevant."
Cisco, meanwhile, is not forgetting about the physical underlay, despite its efforts to sell network engineers on its Application Centric Infrastructure-oriented SDN vision. Cisco remains focused on accelerating throughput at both the core and access levels.
"The problem we solved in 2008 for our customers was the migration to 10 Gigabit Ethernet [GbE] with the Nexus 7000. The difference now is how to migrate from 10 Gb to 40 Gb," said Soni Jiandani, senior vice president for Cisco. "The reason for that is 10 GbE is going to be on the server motherboard. It started with cloud companies, but this year it is going to get more mainstreamed into the enterprise and commercial marketplace. There will be a transition toward more 40 Gb [in the core] and from 1 Gb to 10 Gb in the server access layer."
Physical data center network remains priority for most companies
Bob McCouch, a network architect with a network consulting firm, said most of his clients are focused on the basics of a good physical fabric. "They want reliable equipment, good support from vendors, and the ability to find talent [to manage the network]," he said. "I have a few on the fringe that are starting to move toward serious DevOps and automation. In those, I can see a little bit of interest in products that are more embracing of APIs [application programming interfaces] for programming or that are able to integrate with OpenStack."
"Those fabrics are definitely important to get right," said Teren Bryson, consulting systems engineer for a global systems integrator. "They are the critical underpinning of everything. The thing about VMware and overlays [is] they rely on what's underneath [them]. You still have to build the basics of switching and routing. The fabric is absolutely critical. It affects the decisions that you make later on."
The Magic Quadrant for data center networking doesn't necessarily offer an apples-to-apples comparison of fabrics because of the influence of SDN and network virtualization. Cisco is the sole challenger in the quadrant. HP, Arista, Juniper and Brocade all have fabrics or fabric-like architectures, but they share visionary status with VMware, which sells only virtual ports.
Those virtual ports, however, are impossible to ignore, which is why SDN and network virtualization are increasingly significant pieces of the puzzle.
"If you look at data center architectures, the first layer of physical aggregation has disappeared into the hypervisor and virtual switches," Fabbi said. "Whether there is money involved [in the sale of those virtual ports] or not, the software players and virtual switches are playing a role in the data center."
Make vendors focus on your data center needs rather than their SDN visions
Whether or not network engineers want to embrace SDN and overlays, the underlying fabric remains essential. Fabbi argued that the choice of technology isn't quite as important as the architectural approach that enterprises adopt. Whether they choose Cisco FabricPath, Brocade VCS, Juniper MetaFabric, HP FlexFabric or Avaya VENA, enterprises need to focus on certain architectural principles, such as flattening the network, simplifying operations and adding as much automation as possible.
Network engineers also need to be savvy enough to understand whether vendors are proposing an architecture that truly addresses their needs. Chassis switches, with the scalability and reliability they offer, have long dominated the cores and spines of data center networks. Installing a chassis and adding line cards as the network grows has been the conventional wisdom of data center design in the 21st century. But leaf-and-spine architectures built on merchant silicon and fixed-form-factor switches have presented new possibilities. Data centers can scale out by adding fixed spine switches rather than adding new line cards to an expensive chassis.
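The scale-out arithmetic behind leaf-and-spine is worth seeing concretely. The sketch below is illustrative only -- the port counts, speeds and the `leaf_spine_capacity` helper are hypothetical assumptions, not figures from Gartner or any vendor -- but it shows how fixed switches trade uplink ports against server ports, and how adding spines lowers oversubscription:

```python
# Hypothetical leaf-and-spine sizing sketch (illustrative numbers only).
# Each leaf splits its ports between server-facing downlinks and
# spine-facing uplinks; every leaf connects once to every spine.

def leaf_spine_capacity(spines, leaf_ports=48, spine_ports=32,
                        downlink_gbps=10, uplink_gbps=40):
    """Return (max_leaves, server_ports, oversubscription_ratio)."""
    uplinks_per_leaf = spines              # one uplink to each spine switch
    downlinks = leaf_ports - uplinks_per_leaf
    max_leaves = spine_ports               # each leaf consumes one port per spine
    server_ports = downlinks * max_leaves
    oversub = (downlinks * downlink_gbps) / (uplinks_per_leaf * uplink_gbps)
    return max_leaves, server_ports, oversub

# With 4 spines: 44 server ports per leaf, up to 32 leaves,
# 1,408 server ports total at 2.75:1 oversubscription.
print(leaf_spine_capacity(4))
```

The point of the exercise: growing this fabric means adding leaves up to the spine port count, or adding spine switches to improve the oversubscription ratio -- not slotting line cards into a half-empty chassis pair.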
"Many of the proposals [that vendors offer for enterprise data centers today] tend to be over-engineered," Fabbi said. "They use what I would consider outdated or old-fashioned principles. Every proposal I see still has a redundant pair of chassis in the core of the data center. And those chassis may have at most one or two line cards in them. You might think, 'Well, why am I buying a chassis that has anywhere from seven useable slots up to 10 or 12 if I'm only going to fill a quarter of it?'"
Many vendors haven't learned to right-size the architectures they offer to network engineers, Fabbi said.
"Fixed form factor switches are much more cost-effective, and in many cases, high-performing, because you're not dealing with multiple ASICs on a chassis with a backplane and so on," Fabbi said. "Most of the vendors have more right-sized solutions and some will certainly lead with them. Dell, Arista and Alcatel-Lucent are much more likely to lead with fixed form factors unless you really do need scale. All vendors can right-size if they choose to."