After four years of operating mostly in secret, the company timed its emergence carefully, not just announcing its Network Virtualization Platform, but also revealing the names of several major companies that have deployed its network virtualization architecture into production, including AT&T, Rackspace, eBay, NTT and Fidelity.
In this Q&A with SearchNetworking.com, Nicira CTO and co-founder Martin Casado and vice president of marketing Alan S. Cohen talk about the company's network virtualization architecture.
Nicira Networks is describing itself as a network virtualization company. What is your definition of network virtualization?
Martin Casado: Network virtualization to me has three components. When you virtualize anything, what you end up with must look like what you started with. When you virtualize the network, the final solution must provide a network that looks like the original one; otherwise you limit the workloads that can live in this new domain.
When you [perform] x86 server virtualization, the operating system doesn't know it's not running on the physical machine. Network virtualization is where you build a solution where you can create logical networks on top of a physical network that [have] all the same properties of the physical network.
The second component is [that] all the mapping of the management of that logical [network] to physical view is done totally programmatically. With server virtualization, servers virtualize compute, storage and memory, and anytime things move within the server or new VMs [virtual machines] spin up, all of that has to be done automatically. The same thing [must happen] with network virtualization. You create logical networks, you expose them to VMs and then, anytime things change, it's all automatically patched up.
The last component is that [network virtualization] should be compatible with any hardware. It should work with any vendor. Virtualization does decoupling and the decoupling should be independent of the underlying hardware. Network virtualization is simply creating a logical view of the network and mapping it to a physical view.
Alan S. Cohen: When people talk about other approaches, like using OpenFlow, they're still tied to hardware. Network virtualization doesn't equal OpenFlow.
What is Nicira's Distributed Virtual Network Infrastructure (DVNI) and how does it compare to other examples of network virtualization architecture, such as software-defined networking?
Casado: Software-defined networking is just a general paradigm in which the control plane is decoupled from the data plane. You could use this to run a backbone network or a wireless network. Software-defined networking does not equal network virtualization. It's just one way of creating networks.
DVNI is a network virtualization solution where the intelligence resides at the edge of the network. It's controlled using software-defined networking, and it allows you to create a logical network that is fully independent of the hardware. An OpenFlow solution would try to emulate the same thing, but it would require OpenFlow hardware. We don't require you to change your existing hardware or upgrade. And if you do, it doesn't have to be on an OpenFlow-compatible design. It could be OpenFlow, but it doesn't have to be.
Another thing that differentiates us almost uniquely: We introduce a new address space, which means that our logical networks can look like physical networks no matter where we are. Even if the physical network is L3 [Layer 3] we can give you an L2 [Layer 2] network. Or if it is L2, we can give you an L3 network. We are totally decoupled from physical networks.
Most network virtualization solutions today don't provide you with a virtual network. They provide you with a subset of the existing network, a basic technique called "slicing." VLANs are slicing -- they will take the existing network and give you a piece of it.
Instead of giving you a piece of that, we give you an entirely new network that looks exactly how you want it to look. A VLAN will give you your own little segment of the world, but if you have IPv4 infrastructure, it won't allow your VMs to send IPv6 traffic. It doesn't change the way that the logical view looks, and it doesn't change the physical network.
With our approach, even if you have IPv4 infrastructure, we could allow the VMs to have IPv6. We introduce an entirely new world.
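The IPv6-over-IPv4 trick Casado describes is the classic overlay move: wrap the VM's frame in an outer header the physical network understands, so the underlay never has to parse the inner protocol. The interview doesn't specify Nicira's tunnel format, so as a purely illustrative sketch, here is a VXLAN-style encapsulation (an 8-byte header carrying a 24-bit virtual network identifier) that could carry an IPv6 frame across an IPv4/UDP underlay:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "VNI present" flag bit


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN-style header so an L2 frame (IPv6 or anything
    else) can ride over an IPv4/UDP underlay. The underlay only ever
    sees the outer headers, so the inner protocol is irrelevant to it."""
    # 8 bytes total: flags(1) + reserved(3) + VNI(3) + reserved(1)
    header = struct.pack("!BBHI", VXLAN_FLAG_VNI, 0, 0, vni << 8)
    return header + inner_frame


def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the overlay header, returning (vni, inner_frame)."""
    flags, _, _, word = struct.unpack("!BBHI", packet[:8])
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return word >> 8, packet[8:]
```

The point of the sketch: once traffic is encapsulated this way, the physical network forwards opaque IPv4/UDP packets, and the logical network inside them can look however the tenant wants.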
Can you tell us more about the intelligence at the edge of your network virtualization architecture?
Casado: Nicira is the main developer of Open vSwitch. [The intelligence is] either in an Open vSwitch [on the server] or at top-of-rack. For this announcement, it is in the Open vSwitch. Within the server, we deploy the Open vSwitch, whether Xen or KVM or VMware. And that Open vSwitch, under the coordination of a controller, will create a set of L2 and L3 tunnels [between the physical network and the server hypervisors]. With this, we can create an illusion of a virtual network that will allow us to have any VM run anywhere over any type of hardware.
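In Nicira's design the controller programs the Open vSwitch instances automatically, but the tunnel mesh Casado describes can be sketched manually with standard `ovs-vsctl` commands. The bridge name, IP addresses, and controller address below are hypothetical; this is a configuration illustration, not Nicira's actual provisioning flow:

```shell
# Hypothetical two-hypervisor setup: host A (10.0.0.1), host B (10.0.0.2).
# On host A, create an integration bridge for local VMs:
ovs-vsctl add-br br-int

# Add a GRE tunnel port toward host B; the hypervisors' tunnels, taken
# together, form the overlay the VMs see as their network:
ovs-vsctl add-port br-int gre0 -- \
    set interface gre0 type=gre options:remote_ip=10.0.0.2

# Hand control of the bridge to a (hypothetical) controller cluster,
# which then installs the forwarding state:
ovs-vsctl set-controller br-int tcp:192.0.2.10:6633
```

Repeated on every hypervisor, this yields the mesh of L2/L3 tunnels over which any VM can run anywhere, regardless of the underlying hardware.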
What does Nicira's Network Virtualization Platform actually do?
Casado: DVNI is a general architectural approach, which says you have intelligence at the edge. The Network Virtualization Platform is the product; it's our instantiation of DVNI.
How do you abstract the physical network? Are you using OpenFlow?
Casado: We have a set of servers that are controllers, and they talk to these edge devices -- the Open vSwitch or top-of-rack switches. That communication is using OpenFlow. Because Open vSwitch is something we developed, it doesn't matter that it's OpenFlow. It could be any other protocol and the customers wouldn't know the difference.
The magic in creating this new view is within this intelligent edge. We map between this virtual view of the network and the physical view of the network. When a packet leaves a VM, we do lookups in the virtual world and then we map that into the physical world. We send [the packet] to the physical world. Then we transport it back from the physical world into the virtual world, where we do some more computation on it.
It's very similar to how server hypervisors work. They manage these virtual address spaces and map [them] to the physical address space. We manage these virtual network address spaces and we map [them] to the physical address space along the edge in real time.
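The logical-to-physical mapping Casado analogizes to a hypervisor's memory management can be pictured as a lookup table the controller maintains at the edge. The class, field names, and values below are invented for illustration; they are not Nicira's actual schema:

```python
class LogicalNetworkMap:
    """Toy controller state: where does each logical port physically live?"""

    def __init__(self):
        # (logical_net, logical_port) -> (hypervisor_ip, tunnel_key)
        self._table = {}

    def attach(self, net, port, hypervisor_ip, tunnel_key):
        """Record the physical location of a VM's logical port."""
        self._table[(net, port)] = (hypervisor_ip, tunnel_key)

    def lookup(self, net, port):
        """Resolve a destination in the logical view to the physical
        tunnel endpoint that reaches it (the 'map to physical' step)."""
        return self._table[(net, port)]

    def migrate(self, net, port, new_hypervisor_ip):
        """When a VM moves, only this mapping changes; the logical
        network the VM sees is untouched."""
        _, key = self._table[(net, port)]
        self._table[(net, port)] = (new_hypervisor_ip, key)
```

When a packet leaves a VM, the edge does its lookup in this logical view, picks the physical tunnel endpoint, and the reverse happens on arrival, which is the "patched up automatically" behavior described earlier in the interview.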
Who are your customers within an IT organization? Network architects? Virtualization administrators?
Casado: Primarily we target the cloud architects. Cloud architects understand why networks get in the way when they build these things out. We do work with some network architects, but they tend to be fairly forward-thinking.
Cohen: We've seen service providers that are already organized for cloud, and a certain set of enterprises have begun to organize around cloud. These are multidisciplinary teams -- people who have server virtualization skills as well as storage and networking experience. But that is a fairly nascent movement on the enterprise side. You will see more people start to organize their infrastructure teams into these cloud units. They'll break down the silos of, "I'm a server guy, I'm a storage guy, I'm a networking guy."
Do you need to build relationships with the network hardware vendors?
Casado: No, we don't. These will become two different problem domains. Eventually Microsoft and VMware will also take similar approaches. [The network] hardware will essentially become a backplane. It will become a fabric. That fabric will still have to be competitive. It will have to be competitive on price, on the ability to do QoS, on the scale it can achieve [and] on the latency. That will be a separate entity in the market from the virtual network, which provides the provisioning, the security policies, the QoS policies, isolation and things like that.
As soon as there are [hardware] partnerships in virtualization, you aren't really virtualizing. Virtualization in the past, by its very nature, decoupled the things that would be virtualized. While I think that hardware will adapt to be more amenable to virtualization, just like Intel [servers] adapted for VMware, I don't think there needs to be any tight partnerships between the [network] virtualization companies and the hardware.
You announced several significant customers (AT&T, Rackspace, Fidelity, eBay and NTT). What kind of scale are they achieving with your solutions?
Casado: We have production deployments with production traffic. I'm not allowed to give out the numbers because they are customer-sensitive, but I can tell you these are hundreds of servers and thousands of VMs. These aren't single-rack deployments; they span many hundreds of servers.
Are you specifically focused on cloud provider networks? At this point, what kind of enterprise would need this level of abstraction and control?
Casado: I don't characterize it as being about cloud so much as about virtualization. I think people generally buy into [the idea behind] server virtualization, which is [that] you virtualize your servers and you should get some level of operational flexibility and vendor independence. But the truth is, you virtualize your servers and you have fairly limited operational flexibility and fairly limited vendor independence, particularly when it comes to the network. Our core focus is virtualized data centers, whether or not they're run as clouds. We can add value just by unlocking all that latent potential of server virtualization.
Cohen: Because we have a software model, nobody has to buy a big box. We can start in the enterprise. The question is: Does the enterprise have the recognized need and the pain points? It's not a question of whether an enterprise is attracted to this value proposition. It's a question of where they are on the [virtualization] maturity curve.
Let us know what you think about the story; email: Shamus McGillicuddy, News Director