New network architectures and traditional components: Why both are key

The next stage in network architecture's evolution is upon us. An SDN future is certain. What's less clear is how quickly it will become the norm, and what network architects should do in the meantime to keep their systems healthy and operational.


When Joe Rogers sits with his team to map out the University of South Florida's 2014 networking strategy this month, there will be a lot more at stake than whether to upgrade to the newest router. For Rogers, senior network engineer at the university, everything is on the table—and evaluating whether next-stage network architectures such as software-defined networking (SDN) will fit into USF's future infrastructure is just one piece of the puzzle.

"The biggest challenge is how quickly everything is changing," said Rogers. "There is no consensus within the industry, and certainly no standardization. You talk to one vendor and get their solution; talk to another one and you get another one."

"It's almost funny," said Mike Fratto, principal analyst, enterprise networking and data center technology, at Sterling, Va.-based Current Analysis. "In some ways there is way too much information and in others not enough. There is a lot of hype around SDN, which is good if your job is to follow all that stuff. But if you are designing networks, not so much. Compounding the problem is this: A lot of these technologies are so new that the benefits aren't always evident, and I bring this up because a lot of enterprises consume their new technology through value-added resellers, and the VARs aren't up to speed with what's going on with their chosen vendors."

Networks are always in flux, but this evolution is different

Of course, change—in and of itself—is nothing new for networking pros. The history of networking is underscored by the myriad evolutions that have reshaped the industry—from mainframes to processors and from asynchronous transfer mode to IP. Yet SDN—and its associated management components—is a development that represents an utter transformation of conventional networking. At the same time, the underlying physical network—with its collection of legacy protocols and legacy challenges—is not disappearing. For Rogers, and his peers at organizations large and small, making sure their core networks are up to date even as they assess the potential impact of next-stage networking is mission No. 1.

"There have been a lot of rapid developments in other areas, like wireless, but those are all standards-based," thus easing the migration, Rogers said. SDN's standards, such as they are, remain in flux, providing another level of uncertainty. "I'm not going to deploy an app that doesn't have standards."

Rogers has the funding to upgrade the school's existing three data centers, which are anchored by Cisco Catalyst 6500 switches. He's examining all of the latest devices, including Cisco's Nexus 7000 and 9000 series, to determine what will best serve USF's needs in the coming years. A key requirement: the ability to migrate virtual machines between data centers and keep the same IP address space.

"That's something we can't do today, but it is something we can do with future technologies, whether they are SDN or [Cisco's] LISP (Location Identifier/Separation Protocol), or via multiprotocol label switching (MPLS), so we are trying to find the best way to provide that," Rogers said. " I still don't think any vendor has really stepped up to answer the question of how to take the workload and move it at Layer 2 between two data centers and have that traffic go efficiently."

A robust Layer 2/Layer 3 foundation, in fact, will remain a key ingredient of USF's future infrastructure regardless of Rogers' decision—whether that includes VMware's NSX, Cisco's Application-Centric Infrastructure (ACI) or some other next-stage topology.

"You need that reliable infrastructure for the NSX tunnel to be supported," Rogers said, adding that he believes that the Transparent Interconnection of Lots of Links (TRILL) protocol may be the standard that will enable the scaling up of bandwidth necessary to support future network designs.

Overlays can't replace core network upgrades

For his part, Seattle-area network architect Teren Bryson said curiosity about virtual overlays and tunnels can't displace the need among enterprises to shore up their core networks. "Unless you are trying to solve a specific problem, SDN might factor into the equation, but it doesn't necessarily sway the buying decision." Case in point: switches that support OpenFlow. Although the protocol is being touted as a standard through which SDN networks can be programmed independently, Bryson said there is no guarantee OpenFlow will ultimately prevail—particularly as vendors jockey for position. "If I were going to put a lot of money down on new core switches today, I'm not sure there is anything out there that would cause me to move from one platform to another," he said. "It's cool to look at, but I don't think this will be the cycle [in which SDN-centric decisions will be made]."
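For readers who want a concrete sense of what "programmed independently" means, the sketch below shows a minimal OpenFlow controller application written with the open source Ryu framework. It is purely illustrative, not something Bryson or any vendor endorses, and it simply tells a switch to flood every packet it punts to the controller, the hub behavior most OpenFlow tutorials start with.

```python
# Illustrative sketch only: a minimal OpenFlow 1.3 "hub" application for the
# open source Ryu controller framework. It floods every packet the switch
# sends to the controller; a production app would install flow entries instead.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FloodHub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Send the packet back out every port except the one it arrived on.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions,
                                  data=data)
        datapath.send_msg(out)
```

Run under ryu-manager against any OpenFlow 1.3 switch, the same application works regardless of which vendor built the hardware, which is the portability argument OpenFlow advocates make.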

That said, Bryson is holding off on core upgrades—for now. A long-time Cisco customer, Bryson said he will be migrating some of his existing Catalyst 6500 switches to the Nexus line, but Cisco's November unveiling of its Nexus 9000 and ACI strategy has delayed his purchasing decision until he gets more information.

"If I were able to build a data center from scratch, my decision would be completely different because I wouldn't have to leverage the millions of dollars in existing equipment that I have.

"So we will take a look at all of it, and when it comes to what we will do, it will be definitely incremental, with some dabbling at the edges with SDN. As far as production and purchasing decisions, we are not quite there yet. But no matter how much virtualization goes on top, you still need to have gear [to interact with it] on the underside."

Michael Martin, a New York-based global infrastructure architect for security and networks, said the bigger issue for next-stage network design may hinge upon apps with the ability to multiplex and diversify how they communicate. Application models must be able to take advantage of multithreading and multi-transactional processing, he explained. "This is where you can see, at the application layer, making the most efficient use of capacity. As an engineer, I hear every day, 'Why is the network so slow?' It's not slow; it's the application."
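As a rough illustration of the application-level parallelism Martin is describing, the Python sketch below issues several requests concurrently instead of one at a time. The URLs are hypothetical placeholders; the pattern is what lets an application fill available network capacity rather than serializing its waits.

```python
# Rough illustration of application-level multiplexing: overlap the waits for
# several network calls instead of performing them one after another.
# The URLs below are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = [
    "http://example.com/api/orders",
    "http://example.com/api/inventory",
    "http://example.com/api/customers",
]


def fetch(url):
    # Each call blocks on network I/O; running the calls in a thread pool
    # multiplexes that waiting time instead of stacking it end to end.
    with urlopen(url, timeout=5) as resp:
        return url, len(resp.read())


with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size, "bytes")
```

A single-threaded version of the same loop would take roughly the sum of the three response times; the pooled version takes roughly the slowest one, which is Martin's point that perceived network slowness often lives in the application.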

A delicate transition to new network architectures

Upgrading networks is a very delicate transition, said Dave Chandler, practice lead for enterprise networks at St. Louis-based systems integrator World Wide Technology Inc. (WWT). "I think the industry is getting into a phase now where people understand that their networks—as they are—will be around for a very long time, but what they can do is investigate some of the low-hanging fruit potential of SDN."

To clients with questions about where to invest in their core networks, Chandler said he ultimately advises them to sit tight.

"I believe we are looking at two things right now," he said. "The initial push was development of southbound APIs, so there was a lot of talk about OpenFlow and vendor-specific APIs. Now I think the second shift is over the programmatic access to the network, and now we don't care so much how we talk on the southbound API from the controller. There is much more interest in the controller and the applications the controller can run."

To that end, legacy management interfaces such as the command-line interface (CLI) and Simple Network Management Protocol (SNMP) may have to bridge the gap between legacy networking and SDN. "The simple legacy protocols that I'm using today"—that is, protocols that will enable administrators to continue to use their existing gear without having to upgrade software—"I can continue to use … but more in a controller-operation model," Chandler said.
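One way to picture CLI-as-a-southbound-bridge is a central automation script that drives existing switch CLIs over SSH, so the gear and its software stay put while operations shift to a controller-style workflow. The sketch below uses the open source netmiko library as one example; the device address and credentials are hypothetical.

```python
# Hedged sketch: driving a legacy CLI from a central script, so existing gear
# can participate in a controller-operation model without a software upgrade.
# Uses the open source netmiko library; device details are hypothetical.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",   # netmiko platform identifier
    "host": "198.51.100.10",      # placeholder management address
    "username": "admin",
    "password": "secret",
}

conn = ConnectHandler(**device)

# Gather state the same way an operator would at the console.
version = conn.send_command("show version")
interfaces = conn.send_command("show ip interface brief")

conn.disconnect()

print(version.splitlines()[0])
print(interfaces)
```

SNMP can fill the same role for polling and traps; either way, the "controller" here is simply centralized software issuing the legacy protocol on the administrator's behalf.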

As the industry matures and richer APIs or OpenFlow implementations emerge, organizations can then upgrade while remaining grounded in a controller-operation model.

Network administrators will be able to take a page from cloud's evolution to help them map their migration paths, said Chandler.

"We will begin to build out, maybe a network management tool or some sort of programmatic access tool, that will permit us to use our old mechanisms and then move them into this new process over time. It won't be a big bang. Cloud wasn't a big bang."

Likewise, overlay networks and approaches that rely on traditional legacy protocols to serve as southbound APIs out of the controller will ensure that legacy networks continue to thrive.

"The message is to continue to invest in existing infrastructure, and at WWT we are still training and demonstrating the networks we have been training and demonstrating for years," Chandler said, adding that where a next-stage implementation might make sense, say, for a network tap, WWT is showing its clients how an SDN-flavored tap would differ from a conventional device. "It's doing the same function, but a different capability because it's coming from a controller," Chandler said. "This is how we can help the customer decide where is the right place to continue legacy investments and where to move into SDN, at least initially."

This was first published in February 2014
