By Ivan Pepelnjak, CCIE No. 1354, Fast Packet Blogger
VMware has recently launched the vCloud Director product, which gives vSphere administrators -- who are usually server administrators -- the ability to create logical networks, route between them, use Network Address Translation (NAT) and even implement firewalling between the logical segments. The increasing overlap between the responsibilities of network administrators and the virtualization administrators who control vCloud functionality is worrying for some networking engineers, but the implementation details are sure to make some people explode with rage.
According to the VMware vCloud Director Evaluator's Guide, vCloud Director uses three mechanisms to isolate network pools:
- Portgroups within a single physical vSphere server;
- VLAN-backed network pools;
- MAC-in-MAC encapsulation, known as VMware vCloud Director Network Isolation technology (VCDNI).
When using the last mechanism, the vSphere servers implement their own logical layer 2 and layer 3 networks, including NAT and firewalling functionality on top of the bridged data center infrastructure, totally bypassing the networking team.
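VCDNI's exact frame format is proprietary, but MAC-in-MAC encapsulation in general works by wrapping the guest's original Ethernet frame inside an outer Ethernet header addressed between the hypervisor hosts, so the physical switches only ever see hypervisor MAC addresses. A rough sketch (the EtherType 0x88B5, an IEEE "local experimental" value, and the sample MACs are placeholders, not the VCDNI wire format):

```python
import struct

def mac_in_mac_encap(inner_frame: bytes, outer_src: bytes,
                     outer_dst: bytes, ethertype: int = 0x88B5) -> bytes:
    """Wrap a complete inner Ethernet frame in an outer Ethernet header.

    The outer source/destination MACs identify the vSphere hosts; the
    EtherType used here is a placeholder, since VCDNI's real
    encapsulation format is not publicly documented.
    """
    outer_header = outer_dst + outer_src + struct.pack("!H", ethertype)
    return outer_header + inner_frame

# A minimal inner frame: guest dst MAC, guest src MAC, EtherType IPv4,
# dummy payload (sample addresses, for illustration only).
inner = (bytes.fromhex("0050560000aa") + bytes.fromhex("0050560000bb")
         + struct.pack("!H", 0x0800) + b"payload")
frame = mac_in_mac_encap(inner,
                         outer_src=bytes.fromhex("001b21000001"),
                         outer_dst=bytes.fromhex("001b21000002"))

# Only the outer MACs are visible to the data center switches, so guest
# MAC addresses never appear in the physical MAC tables.
assert frame[:6] == bytes.fromhex("001b21000002")
assert frame[14:] == inner
```

The practical consequence is the one described above: because the transport network sees nothing but host-to-host Ethernet frames, the logical topology inside the encapsulation is invisible to (and unmanageable by) the networking team.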
Not surprisingly, VMware is heavily promoting VCDNI technology, officially because it offers better security and prevents inter-VLAN hopping (that's basically admitting the company's hypervisor-embedded switch is broken enough to allow that ancient attack). But behind the scenes VMware is telling server administrators that their lives will be easier because they will be able to bypass the network administrators.
Let's step back a bit. I would love to have a layer-3-only data center network, where every server (physical or virtual) would have its own IP address and the only worry I would have would be to provide end-to-end IP transport between the servers and between the servers and the end-users, meeting my SLA and QoS guarantees.
My wish might be fulfilled if I worked for Google or Facebook (they can't play with fancy features because they're too busy growing their networks and keeping them up and running). But in a typical enterprise data center we were forced to implement today's hodgepodge because:
- Designers of IPv4 chose a 32-bit address length, and everyone chickened out when we wanted to deploy IPv6 15 years ago, so we're stuck with NAT;
- Switch vendors decided to charge a premium for layer 3 functionality, even though forwarding IP packets based on host routes should be as cheap as, or even cheaper than, forwarding layer-2 frames based on MAC addresses;
- We are forced to use load balancers to hide server farms behind a single IP address because we rely on a 30-year-old transport protocol (TCP), there is a lack of session layer in TCP/IP, and we have a broken socket API;
- Operating system vendors tried to weasel their way around limitations of their platforms by implementing crazy schemes like Microsoft Network Load Balancing;
- Some server administrators were unable to secure their platforms, so we had to firewall them ... and because there were enough incidents, we're now forced to use firewalls to be compliant with industry regulations (for example, PCI).
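One of the bullets above argues that host-route IP forwarding should cost no more than MAC forwarding: both reduce to an exact-match lookup on a fixed-length key. A toy sketch of that equivalence (tables and port names are invented for illustration; real switches use hash tables or TCAMs, but the lookup work is the same shape):

```python
# Toy illustration: a layer-2 MAC table and a host-route (/32) table are
# both exact-match lookups on fixed-length keys, so neither forwarding
# model is inherently more expensive than the other.
mac_table = {
    "00:50:56:00:00:aa": "port1",
    "00:50:56:00:00:bb": "port2",
}
host_routes = {            # one /32 route per server
    "10.0.0.11": "port1",
    "10.0.0.12": "port2",
}

def l2_forward(dst_mac: str) -> str:
    return mac_table[dst_mac]      # exact match on a 48-bit key

def l3_forward(dst_ip: str) -> str:
    return host_routes[dst_ip]     # exact match on a 32-bit key

# Same server, same egress port, same amount of lookup work.
assert l2_forward("00:50:56:00:00:aa") == l3_forward("10.0.0.11")
```

The premium pricing for layer 3, in other words, reflects market segmentation rather than any fundamental hardware cost.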
And now a virtualization vendor is trying to persuade people with limited exposure to internetworking issues (I'm positive there are exceptions, but not many server administrators have been building and operating large data center networks) to rebuild the hodgepodge from scratch using untested virtualized components and a GUI. Wish them luck ... and make sure your CIO knows who's taken over the responsibility for the stability and security of the new cloudy infrastructure.
About the author: Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks and is currently chief technology advisor at NIL Data Communications, focusing on advanced IP-based networks and Web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. Check out his IOS Hints blog.