Could you define the challenges that server virtualization generally presents to networking professionals?
The biggest concern we've heard from the networking departments over the last couple of years has been visibility into where the networking traffic goes. It stopped at the physical box boundary, even if there were 20 virtual machines running on that box. And that meant their ability to do inspection and policy enforcement on a per-workload basis was compromised.
That is the other area of concern, and it is related. Not only do they have challenges seeing into the virtual machines within a given server; a virtual machine can also move. And thus any policies you may have set up targeting that physical machine would no longer be in place unless you had already set them up on the destination machine as well. So you're right, it's the combination of visibility and mobility that has caused the problems of control that they're after. Could you describe how VMware is working to solve those challenges?
Starting about six years ago, we recognized that storage was a key area for us to get visibility into, and to really bring the storage administrators into virtualization. And starting about three years ago, we recognized the same thing was happening very rapidly on the networking side. We did a lot of work around NetFlow and other protocols to make sure those worked even in our last release. But the big step is the vNetwork Distributed Switch. The goal of this distributed switch is to provide a logical view of the network that applies automatically to all the servers that are part of it. So we've gotten rid of a lot of the configuration challenges around the network, and we've made the mobility challenge go away.
This way, when you do vMotion and live migrate a virtual machine from one physical machine to another, its network state and policy travel along with it. That makes a lot of those challenges go away.
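The policy-follows-the-VM idea described above can be sketched in a few lines: the switch keys each port's policy to the virtual machine rather than to the physical host, so a migration only changes location. All class and method names here are invented for illustration; they are not VMware's actual interfaces.

```python
# Toy model of a distributed switch: port policy is keyed to the VM,
# so a live migration carries the policy to the destination host.
# All names are illustrative, not VMware's API.

class DistributedSwitch:
    def __init__(self):
        self.policies = {}   # vm name -> policy dict (VLAN, QoS, ...)
        self.location = {}   # vm name -> current host

    def connect(self, vm, host, policy):
        self.policies[vm] = policy
        self.location[vm] = host

    def vmotion(self, vm, dest_host):
        # Only the VM's location changes; its port policy is untouched,
        # so the settings are already "configured" on the new host.
        self.location[vm] = dest_host

    def effective_policy(self, vm):
        return self.policies[vm]


dvs = DistributedSwitch()
dvs.connect("web01", "esx-a", {"vlan": 10, "qos": "gold"})
dvs.vmotion("web01", "esx-b")
print(dvs.location["web01"], dvs.effective_policy("web01"))
# The policy set while on esx-a is still in force on esx-b.
```

The point of the sketch is that no per-host reconfiguration step exists at all: there is nothing to forget on the destination machine.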
Virtualization has kind of forced some of the silos that exist in IT departments to interact more than they have in the past. It started with the storage team having to be tighter with the server team.
The vNetwork Distributed Switch gives network administrators the visibility and control that they're used to and require, but in the context of your virtual infrastructure. So they can change policies and interact with the system. But that all happens without slowing down the provisioning process for which people use virtualization. So the answer is it's for the virtual administrator as well as the networking team. Culturally speaking, there's a leap of faith that networking professionals must make in order to feel comfortable handing over to the systems guys some of the control they're used to having. Is this something that you as a company are trying to facilitate?
That is exactly what we see in a few cases where departments need to interact in a way that is different from what has happened in the past. When we started earlier on, the way it would work is the server team would request a big chunk of storage, and they would request a trunk port of some kind from their networking team. They'd say, "Just give us this and we'll take care of the rest. We'll carve it up and use it." And that worked for a while, until virtualization started to involve 50% or more of the data center.
At that point it's even more critical that the storage team understands capacity management and the networking team knows where the packets are going and can put the right quality of service in place. It is exactly at that point in time when you need to have the right tools and visibility in place so they can do their jobs and interact with the virtual infrastructure. It's not just storage and networking. It's also security teams and application teams. All the teams in the data center need to have the same balance of comfort and visibility with the big advances you get with virtualization. Would you say that you still have more work to do with regard to helping the networking professional get a handle on what's happening with the virtual infrastructure?
I think we made a pretty major step with vSphere. The levels of visibility that they have are quite important. And the other area that I didn't bring up earlier was performance and performance management. We've done a fair amount of that with vSphere.
But what a lot of network administrators care about deeply is that they can logically partition their network and make sure they're assigning the different levels of bandwidth and latency requested by the teams. So we have a bit more work to do to enable full quality-of-service definition that lets the networking team reach all the way up into the virtual machines. And we're working with partners on that. Some people I talk to about virtualization have mentioned that the virtual switching VMware and other vendors provide lives mostly in Layer 2, and there are some enterprises that want more: Layer 3, 4 and beyond. What can you do about that?
Yeah, certainly if you're a pure infrastructure play, you think at the L2 layer. Essentially there are no smarts involved in what's going on. As we work toward 100% virtualization, you need to be more plugged into how traffic is being routed. You need to know about security. And that's really where the push is -- to make sure we can provide application-level firewalls. So we created a set of APIs called the VMsafe APIs.
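To make the application-level-firewall idea concrete, here is a hypothetical per-VM packet filter in the spirit of an introspection hook. The real VMsafe interfaces are low-level and look nothing like this; the sketch only illustrates where such a filter sits, between the virtual machine and the rest of the network.

```python
# Hypothetical per-VM packet filter. The rule format, function names,
# and verdict strings are all invented for illustration -- this is not
# the VMsafe API, just the shape of the hook it enables.

def make_firewall(rules):
    """rules: list of (predicate, verdict) pairs, checked in order."""
    def filter_packet(vm_name, packet):
        for predicate, verdict in rules:
            if predicate(packet):
                return verdict
        return "allow"   # default-allow, purely for the sketch
    return filter_packet

fw = make_firewall([
    (lambda p: p["dst_port"] == 23, "drop"),          # block telnet
    (lambda p: p["dst_port"] in (80, 443), "allow"),  # web traffic
])

print(fw("web01", {"dst_port": 23, "src": "10.0.0.5"}))   # drop
print(fw("web01", {"dst_port": 443, "src": "10.0.0.5"}))  # allow
```

Because the hook is attached to the virtual machine rather than a physical port, the same rules keep applying wherever the VM runs.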
We also, just a couple of days ago, announced our first security product ourselves: vShield Zones, an L4-7 aware firewall (which came out of our acquisition of a company called BlueLane) that runs completely in a virtual machine. vShield allows people to create logical separation zones on a network, even on a completely shared infrastructure. So that brings some of those capabilities inward.
And in the last area, around increasing levels of visibility, we are increasingly seeing people connect data centers using virtualization. That might be for disaster recovery. But, over time, this ability to flexibly move across data centers will require more spanning, maybe of the L2 layer but maybe even all the way up to the traffic management layers. You need to know where things are running and redirect your incoming requests to that particular virtual machine. I see that growing more and more, with us and our partners moving to a higher-level awareness of where things are running in our data centers. Would you say that most of your customer base is made up of systems guys at this point?
We have about 130,000 enterprise customers. And a lot of them are organized slightly differently. We have seen a lot of companies create a center of expertise that crosses servers, networking and storage: a virtual infrastructure team. At least a lot of the more advanced ones have taken that approach as they go toward 100% virtualization. That way they have input from all the owners of the data center and can move very aggressively to get best practices in place and get the right tools set up.
At this point, we sell at a fairly high level in the organization, which naturally has to have input from a number of different teams. So then your sales engineers and marketing people, they need to be able to communicate not just with systems guys but storage and networking, and they need to speak all these different languages?
We're a very partner-driven company, especially on the sales side. We have value-added resellers as well as the big OEMs and even the big system integrators that all have relationships with the customers. As a company, we started more at the individual departments, in the test and development areas. We've grown up to be a pretty major part of the data center, and we've gone up higher and higher in the hierarchy. The main thing that happens as you go higher is you speak less about speeds and feeds and more about the business challenges they have and how they're looking to save on operating and capital expenses. How are you working to help network security professionals get a better handle on what's happening within the virtual infrastructure? They want to know that something like an intrusion prevention system is going to work. Are you partnering with companies? Are you designing new products?
It's been a lot of different approaches we've taken over time. From the start, our whole approach has been to fit into the existing data center, whether you have a Check Point firewall on your perimeter or you have traffic management systems. First of all, it was making sure that we fit into them and, where needed, could integrate with them as physical appliances and say that a vMotion occurred, so start watching it here.
The transition that started to occur is that we've seen more and more network vendors begin to look at software deployment models of their appliances. And this VMsafe API is a set of interfaces they plug into that allows them to bring their IDS or IPS system closer to the application and also bring it to an entirely virtualized instance. So it moved from being a strict physical appliance on the perimeter to also having a software one that's within the virtual infrastructure.
In that world we have a very nice spot in the overall computation, because we see every networking packet going by. We can give them the level of visibility they need to do the job. But what's more disruptive is we're seeing a pretty good set of partners, from Check Point to McAfee and Symantec, deliver security products that have a network component and also a host-based component. So they can watch packets come in and say, "This one looks like a bad one." And they can actually correlate it with what's running in the virtual machine at the same time.
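The correlation idea can be sketched very simply: a network sensor's verdict on a packet is combined with host-side knowledge of what is actually running in the targeted VM. The data shapes and names below are invented for illustration, not any partner's product API.

```python
# Sketch of network/host correlation: an alert on a packet is weighted
# by whether the targeted port belongs to a service running in the VM.
# All field names and severity labels are invented for illustration.

def correlate(packet_alert, vm_processes):
    """Raise severity when the targeted port matches a running service."""
    port = packet_alert["dst_port"]
    hit = [p for p in vm_processes if p["port"] == port]
    if hit:
        return {"severity": "high", "process": hit[0]["name"]}
    return {"severity": "low", "process": None}

alert = {"vm": "web01", "dst_port": 443, "signature": "TLS exploit"}
procs = [{"name": "httpd", "port": 443}, {"name": "sshd", "port": 22}]
print(correlate(alert, procs))   # {'severity': 'high', 'process': 'httpd'}
```

A packet aimed at a port where nothing is listening can be safely deprioritized; the same packet aimed at a live service becomes a high-severity event.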
I think we'll see more security appliances that are delivered fully in software that can do even better levels of security by correlating more events together. But we do have to make sure that we work with what they have today and set a model for future security products. Many switching vendors say they are coming out with features that will make them virtualization aware. But when they describe what they are doing, it's very vague. What's your sense of what the switching vendors are doing in order to become more integrated with virtualization, and do you see a need for some standards-based approach?
Different vendors have their own areas that they excel at, and their priority is to make sure that capability is virtualization aware. What that typically means is they have better visibility into when things are running and they know when things move around. So we developed a pretty good set of APIs that a lot of them are attaching to that give them that basic level. So if you are a NetFlow protocol analyzer, you can actually see at the virtual machine level what is happening. Or you can simply be notified when a vMotion occurs. You as a network switch now say, "OK, this thing is actually on server B instead of server A."
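From the switch's side, that notification amounts to an event handler that re-points its record of where a VM lives. A minimal sketch, with invented event and field names (the real APIs are not Python callbacks):

```python
# Sketch of the switch-side view of a vMotion notification: the agent
# updates its learned VM-to-port mapping when the event arrives.
# Event structure and names are invented for illustration.

class SwitchAgent:
    def __init__(self):
        self.vm_port = {}   # vm -> (server, port) currently learned

    def on_vmotion(self, event):
        # "OK, this thing is actually on server B instead of server A."
        self.vm_port[event["vm"]] = (event["dest_server"],
                                     event["dest_port"])

agent = SwitchAgent()
agent.on_vmotion({"vm": "db01", "dest_server": "server-b", "dest_port": 7})
print(agent.vm_port["db01"])   # ('server-b', 7)
```

Without such an event, the switch would have to wait and re-learn the VM's location from traffic; with it, policy can follow the move immediately.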
The standards question is very good, and there are some early efforts going on. VN-Link is the one I'm more aware of, which Cisco proposed. This is a way of routing traffic between virtual machines by simply adding another tag to the packets that tells you what VM it's from and where it's going. And that will allow them to do in hardware some of the routing decisions that are today done purely in a software solution. I believe that's been proposed to a standards organization, and the early discussions are in place. From our perspective, we won't be taking particular sides, but we do agree that if there's a standard way to make switches more aware of virtualization, that's good for everybody. As companies dive deeper into virtualization and move past that 50% mark, the traffic from the aggregation layer to the core of the network sort of explodes. Is there anything that VMware can do to help manage that at all?
Certainly as you get denser levels of computation going on in the data center, more traffic and more storage gets into a denser area. But we've seen a variety of techniques that have always been used to do this. You'll still have perimeter routing devices, but you'll also have interior devices which are actually giving a nice opportunity for the switch vendors.
Today, when you have things that are on the perimeter, you tend to overprovision them pretty heavily because you know that all traffic goes through them. If you can instead keep traffic internal and think about smaller devices inside that are more customized for the virtual machines that are there, we feel that over time we can build a hierarchy of right-sized networking devices that will make the whole thing more efficient. And with vMotion capabilities, the opportunity is there to bring different workloads that are talking to each other onto the same box, or at least closer to one another. So we think we can get this locality into the network traffic that's going on and ultimately isolate workloads from one another.
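One way to read "bring workloads that talk to each other closer" is a greedy pass that co-locates the chattiest VM pairs on the same host when capacity allows. This is purely an illustrative sketch, not VMware's placement logic; all names and the slot-based capacity model are invented.

```python
# Greedy co-location sketch: walk VM pairs from most to least traffic
# and move the second VM next to the first if the host has a free slot.
# Not a real placement algorithm -- just the locality idea in miniature.

def colocate(traffic, placement, capacity):
    """traffic: {(vm_a, vm_b): bytes}; placement: {vm: host};
    capacity: {host: free slots}. Mutates and returns placement."""
    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if placement[a] != placement[b] and capacity[placement[a]] > 0:
            capacity[placement[b]] += 1   # b frees a slot on its old host
            capacity[placement[a]] -= 1   # and takes one next to a
            placement[b] = placement[a]
    return placement

placement = {"web": "h1", "app": "h2", "db": "h2"}
capacity = {"h1": 1, "h2": 0}
traffic = {("web", "app"): 900, ("app", "db"): 100}
print(colocate(traffic, placement, capacity))
```

In this run the busy web/app pair ends up on the same host, so their traffic never leaves the box, while the quieter app/db pair stays split because no capacity remains.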