Dell Inc. may still be known to most enterprises as a hardware supplier, but the company has spent about $12 billion on acquisitions over the past three to five years in pursuit of a dramatic makeover. Part of that transformation is a new data center network architecture that relies on software-defined networking and network automation.
Dario Zamarian, Dell's executive vice president and general manager (and a former Cisco executive), spoke with Margie Semilof, an editorial director at TechTarget, about some of the company's positions on data center network architecture developments.
Can you talk about Dell's virtual network architecture -- specifically, the importance of network programmability and where it fits into the private or hybrid cloud?
Dario Zamarian: Virtual network architectures are the cornerstone of the dynamic data center. The whole idea of giving value and priority to the application in the data center is where the value [lies]. [Until now] network designs in the data center were blueprints that did not have any correlation to what sat on top. With the advent of virtualization and [the dawn] of competitors to Cisco, the game has been turned upside down. A proper data center network design [now] has to start with [the questions]: What do I want to do at the virtualization layer? How can I move my virtual machines [VMs] around? How can I put my application in a VM? How can I profile the networking needs of those applications? And how can I make sure that when those apps get distributed across the data center, they can utilize the best possible underlying network? Therefore, software-defined networking [SDN] finds its place in dynamically adjusting the network that sits [beneath] the application layer to be the best possible fit for the application.
To do that you need a three-layer architecture where you have the app and management layer, the controller, and the switches themselves. The controller becomes, in essence, a dispatcher of instructions taken from the upper layer. Now we have OpenFlow-based, standards-adopted switches ... with all the pieces together. We are not the only ones, but we think we have an opportunity to link the app layer to the controller and the switch.
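The three layers Zamarian describes can be sketched in a few lines: an application states its intent at the management layer, a controller translates it into an OpenFlow-style flow rule, and the switch merely installs what it is told. All of the names here (AppIntent, Controller, the flow-rule fields) are illustrative, not any real product's API.

```python
import json

class AppIntent:
    """Management-layer request: 'give this app's traffic priority'."""
    def __init__(self, app_name, vlan, priority):
        self.app_name = app_name
        self.vlan = vlan
        self.priority = priority

class Controller:
    """Middle layer: translates intent into per-switch flow rules."""
    def __init__(self, switches):
        self.switches = switches  # southbound view of the switch inventory

    def apply(self, intent):
        # An OpenFlow-style rule: match on VLAN, steer to a priority queue.
        rule = {
            "match": {"vlan_id": intent.vlan},
            "actions": [{"set_queue": intent.priority}],
            "priority": intent.priority,
        }
        for sw in self.switches:
            sw.install(rule)
        return rule

class Switch:
    """Bottom layer: just stores flow rules; no manual CLI work needed."""
    def __init__(self, dpid):
        self.dpid = dpid
        self.flow_table = []

    def install(self, rule):
        self.flow_table.append(rule)

switches = [Switch(dpid) for dpid in ("sw1", "sw2")]
ctrl = Controller(switches)
rule = ctrl.apply(AppIntent("analytics", vlan=100, priority=7))
print(json.dumps(rule))
```

The point of the middle layer is exactly what the interview says: the app never talks to a switch, and the switch never needs a human at a command line.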
Where do network virtualization and SDN meet?
Zamarian: SDN is a way of [avoiding] a manual process [or using] a command-line interface to change the network. Now I don't need the expertise of a highly trained network engineer. I'd rather have a set of developers that put together an automated system with the ability to take the instructions at a management-level layer of the app and give it to the controller, and have switches that don't need to be programmed [by hand]. Suppose you want to transfer a lot of data overnight between two points. Wouldn't it be great if the app could initiate the opening of a VPN tunnel with a lot of bandwidth without having anyone manually intervene? So SDN becomes an opportunity to have the network programmed in real time based on the needs of the application.
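The overnight-transfer scenario might look something like this from the application's side: the app itself asks a controller for a high-bandwidth tunnel over a fixed window, rather than a network engineer configuring a VPN by hand. The `reserve_tunnel` function and the controller's reservation queue are hypothetical stand-ins for a real northbound API, not a documented interface.

```python
from datetime import datetime, timedelta

def reserve_tunnel(controller, src, dst, gbps, start, hours):
    """Ask the (hypothetical) controller for a point-to-point tunnel
    with guaranteed bandwidth during a fixed time window."""
    reservation = {
        "src": src,
        "dst": dst,
        "bandwidth_gbps": gbps,
        "start": start.isoformat(),
        "end": (start + timedelta(hours=hours)).isoformat(),
    }
    controller.append(reservation)  # stand-in for a northbound API call
    return reservation

controller = []  # stand-in for a controller's reservation queue
res = reserve_tunnel(controller, "dc-east", "dc-west", gbps=40,
                     start=datetime(2013, 4, 1, 22, 0), hours=8)
print(res["end"])  # tunnel tears down at 06:00 the next morning
```

When the window closes, the controller would remove the flow rules and the extra bandwidth returns to the pool -- no manual intervention at either end.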
How does Dell integrate this kind of networking into cloud orchestration architectures, such as OpenStack or CloudStack?
Zamarian: Let's recognize that OpenStack is a new area. You can argue that it's not ready for prime time when it comes to the majority of customers being able to put a whole private cloud on OpenStack. But we're pretty bullish on OpenStack because it allows the usual open source development ecosystem to kick in with a lot of innovation. We are participating in that [by developing] a number of extensions for the OpenStack community. We are systematically developing modules and plug-ins across networking, servers and storage for an OpenStack architecture. If you are a cloud service provider, you can use our storage, servers and networking without relying exclusively on a VMware or Microsoft hypervisor. [This will eventually be the case for private cloud customers, too.] OpenStack can become a viable alternative to that. It will take a bit of time because customers have different [levels of] sophistication and use cases. Therefore, by definition, OpenStack starts at the high end, where cloud service providers find economic value, but for the majority, this is still a hypervisor-dominated space.
Dell has a strong data center fabric strategy. Do we need fabrics in a world of SDN or vice versa?
Zamarian: A few years ago, fabrics were at the center. After that period of infatuation with the word fabric -- it was supposed to solve every possible data center networking problem -- we've recognized that there are very different kinds of customers out there. Customers have different degrees of problems in the data center. End-to-end Layer 2 fabric has been the promise of companies like Juniper and Brocade. But we've seen that these fabrics are not only extremely complex technologically to pull off, they are also not as needed as originally thought. You will see more Web 2.0 data centers going into a balancing act between islands of Layer 2 networks connected to Layer 3 products. The span of control of a single flat switch has met the realities of how you manage these networks. So connecting these confined Layer 2 networks to Layer 3 is what the most advanced Web 2.0 data center companies are doing. Fabrics are not dead, but I think they [have had] a reality check in terms of the operational requirements and needs of a data center. Also, SDN is becoming an interesting augmentation of fabric. We don't even call it fabric. We call it an end-to-end, Layer 2, Layer 3 data center network. What we care about is making sure the virtual machines and the server compute nodes get connected with a non-blocking, high-bandwidth connection to every rack and every chassis in the data center.
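The "islands of Layer 2 connected by Layer 3" design can be illustrated with a simple forwarding decision: traffic between hosts in the same rack-scale island is switched at Layer 2, while traffic between islands crosses a Layer 3 boundary. The subnets and island names below are made up for the sketch.

```python
import ipaddress

# Each "island" is a rack-scale Layer 2 domain with its own subnet
# (illustrative addressing, not a real deployment).
ISLANDS = {
    "rack-a": ipaddress.ip_network("10.0.1.0/24"),
    "rack-b": ipaddress.ip_network("10.0.2.0/24"),
}

def forwarding_layer(src, dst):
    """Return 'L2' if src and dst share an island, else 'L3' (routed)."""
    for net in ISLANDS.values():
        if (ipaddress.ip_address(src) in net
                and ipaddress.ip_address(dst) in net):
            return "L2"
    return "L3"

print(forwarding_layer("10.0.1.5", "10.0.1.9"))  # L2: same island
print(forwarding_layer("10.0.1.5", "10.0.2.9"))  # L3: routed between islands
```

Keeping each Layer 2 domain small is what sidesteps the complexity of the end-to-end fabrics the interview mentions: failure domains and MAC tables stay bounded, and Layer 3 scales the rest.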
How important is the move into 40 Gb and eventually 100 Gb Ethernet considering new network architectures?
Zamarian: It's very important. When every server has the ability to have 10 Gb connections, and apps are doing data access and data management, and you have more density of virtual machines on a per-server node, bandwidth needs will grow exponentially. Having 10 Gb on a per-server basis, by definition, will require upstream traffic to be either 40 or 100 Gb. We aren't seeing 100 Gb as a must-have. But we have seen that if you develop a 40 Gb-based, non-blocking distributed core architecture where you have a spine and leaf and scale linearly, now you have the backbone of a 40 Gb network that allows for maximization of east and west routing within a chassis, [as well as from] one chassis to another to a distributed core. With the combination of 10 Gb and 40 Gb, but more importantly the architecture that you use to put them together, there is enough bandwidth in the data center to go for a while. Most of the traffic now will be virtual machine to virtual machine; you can call it east and west traffic. For that you want to optimize locally first, before you go all the way to a backbone with 100 Gb or more. You'll have demand coming from the end user outside of the data center, but for the time being, that doesn't require 10 or 40 Gb on the WAN link as well.
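A back-of-the-envelope check makes the 10 GbE / 40 GbE claim concrete: a leaf switch full of 10 Gb server ports needs several 40 Gb uplinks to the spine to stay close to non-blocking. The port counts below are illustrative, not a specific Dell product.

```python
def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of downstream (server-facing) to upstream (spine-facing)
    capacity on a leaf switch; 1.0 means fully non-blocking."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 servers at 10 Gb, with 6 x 40 Gb uplinks to the spine:
ratio = oversubscription(48, 10, 6, 40)   # 480 Gb down / 240 Gb up
print(f"{ratio}:1 oversubscription")      # 2.0:1

# Doubling the 40 Gb uplinks to 12 gets you to non-blocking (1.0:1),
# which is why 40 Gb spines cover east-west traffic without needing 100 Gb.
```

This is the arithmetic behind "optimize locally first": most VM-to-VM traffic stays within or between leaves, so spending on 40 Gb spine capacity pays off before 100 Gb does.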