The first article in this series discussed the overhead that virtual, or soft, switching adds to network-intensive workloads. Directly attaching peripheral component interconnect (PCI) devices offers one option to reduce that overhead, but single-root I/O virtualization, or SR-IOV, provides a mostly superior alternative.
SR-IOV is a PCI standard with important performance implications for virtual networking.
The SR-IOV specification splits a host's network card into physical and virtual functions.
The physical function handles the link characteristics, while the virtual functions present a number of lightweight, logical PCI interfaces the administrator can assign to guests.
These virtual interfaces are logical -- effectively dedicated -- devices. This allows the network card to forward traffic directly to the correct virtual machine (VM), without requiring a virtual switch to duplicate a forwarding decision.
The reduction in overhead should benefit all guests, but network-intensive actions benefit the most.
Superficially, the SR-IOV specification looks like an easy win for the administrator. SR-IOV seemingly offers higher throughput and more efficient use of existing resources. However, there are some fairly obvious questions:
- How do I get and enable SR-IOV?
- What is the actual benefit?
- Why isn't it used all the time?
The answers are not straightforward.
SR-IOV support hard to evaluate
From a hardware perspective, if you commissioned your data center infrastructure in the past five years, the prerequisites underpinning SR-IOV are probably in place. Furthermore, if you buy -- or more likely rent -- your hardware, the SR-IOV specification is likely buried in your most recent request for proposal. However, for a couple of reasons, SR-IOV is surprisingly hard to evaluate. You will need the following:
- a CPU with Intel Virtualization Technology for Directed I/O (VT-d) support;
- a kernel with an Intel input/output memory management unit (IOMMU) driver; and
- a network card that supports SR-IOV.
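On Linux, the first two requirements are quick to verify before you spend money on network cards. The following sketch checks whether the kernel was booted with the IOMMU enabled; the intel_iommu=on parameter is the usual way to switch it on via the GRUB command line, and its absence is a common reason SR-IOV appears not to work.

```shell
#!/bin/sh
# Sketch: check whether the running kernel was booted with the IOMMU enabled.
# On Intel hardware, intel_iommu=on (often with iommu=pt) is the usual
# GRUB kernel parameter; without it, VT-d support goes unused.
CMDLINE=$(cat /proc/cmdline 2>/dev/null)

case "$CMDLINE" in
    *intel_iommu=on*|*iommu=pt*)
        IOMMU_STATUS="enabled" ;;
    *)
        IOMMU_STATUS="disabled" ;;
esac

echo "IOMMU on kernel command line: $IOMMU_STATUS"
```

If the result is "disabled," add intel_iommu=on to the kernel command line and reboot before testing any SR-IOV-capable card.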
As a rule, only server-class NICs have SR-IOV functionality. What's more, the number of virtual functions, or logical interfaces, varies between individual cards. Budget options may support seven per port, while 10 Gigabit Ethernet converged network adapters may support 60 or more. Each virtual function can be mapped to a VM interface, either dynamically or manually, as the hypervisor allows. Other issues include the following:
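A card's virtual function count is exposed through sysfs on Linux, so you can query it without vendor tools. The sketch below assumes a hypothetical interface name of eth0; substitute the name of your SR-IOV-capable interface. Writing a count to sriov_numvfs (shown commented out, since it requires root) is what actually creates the virtual functions.

```shell
#!/bin/sh
# Sketch: report how many virtual functions a NIC supports.
# "eth0" is a placeholder; substitute your SR-IOV-capable interface name.
NIC=eth0
TOTAL_PATH="/sys/class/net/$NIC/device/sriov_totalvfs"

if [ -r "$TOTAL_PATH" ]; then
    TOTAL_VFS=$(cat "$TOTAL_PATH")
    echo "$NIC supports $TOTAL_VFS virtual functions"
    # Creating VFs requires root: write the desired count to sriov_numvfs.
    # echo 4 > "/sys/class/net/$NIC/device/sriov_numvfs"
else
    TOTAL_VFS=0
    echo "$NIC reports no SR-IOV support"
fi
```

Once created, the virtual functions appear as additional PCI devices in lspci output, ready to be mapped to VM interfaces.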
Network cards. Because SR-IOV is a PCI standard, PCI Express network cards are a requirement. To test SR-IOV, you will need server-grade components, which means a relatively substantial investment in space and cash. One other point: Some OEMs deliberately disable SR-IOV functionality in chipsets that would otherwise support it. This isn't an issue if you're buying new chipsets, but if you're investigating using SR-IOV with used or older equipment, this can be a source of significant frustration.
Access control service. While not technically a requirement, access control service (ACS) should be implemented on the PCI controller. Without it, the function of SR-IOV is compromised, requiring significant kernel intervention. Sadly, Intel implemented ACS only on the Xeon E5 line and a handful of Core i7 workstation CPUs. This makes SR-IOV a difficult proposition on anything other than data-center-class devices.
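You can check whether your PCI topology advertises ACS with the lspci tool from pciutils. This sketch simply counts devices reporting the ACS capability; running it as root gives the most complete view of the capability lists.

```shell
#!/bin/sh
# Sketch: count PCI devices advertising the ACS capability (needs pciutils).
# lspci -vvv prints extended capabilities; "ACSCap" marks ACS support.
if command -v lspci >/dev/null 2>&1; then
    ACS_COUNT=$(lspci -vvv 2>/dev/null | grep -c 'ACSCap')
else
    ACS_COUNT=0
    echo "lspci not found; install pciutils to inspect PCI capabilities"
fi

echo "Devices advertising ACS: $ACS_COUNT"
```

A count of zero on a consumer-grade board is expected and is exactly the limitation described above.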
Software requirements. Common hypervisors, such as VMware ESXi, KVM, Xen and Microsoft Hyper-V, all support SR-IOV. However, in the case of ESXi, the user interface is not exposed unless vSphere is also present. Larger Amazon Elastic Compute Cloud instances support enhanced networking for guest VMs, and Microsoft Azure now has limited support. Windows and most Linux distributions can use SR-IOV virtual functions, but the specification's support among virtual appliances is spotty, including many cloud-native products. Given that SR-IOV has been around since 2009, this lack of support is surprising.
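On KVM, handing a virtual function to a guest is typically done through libvirt as a PCI host device. The sketch below is illustrative only: the guest name "vm1" and the PCI address 0000:03:10.0 are placeholders, and you would list your real virtual function addresses with lspci -D first.

```shell
#!/bin/sh
# Sketch: pass an SR-IOV virtual function to a KVM guest via libvirt.
# "vm1" and the PCI address 0000:03:10.0 are hypothetical placeholders;
# find real VF addresses with: lspci -D | grep -i 'virtual function'
cat > vf-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</hostdev>
EOF

if command -v virsh >/dev/null 2>&1; then
    # Attach persistently; the guest then sees the VF as a dedicated NIC.
    virsh attach-device vm1 vf-hostdev.xml --config
else
    echo "virsh not found; install the libvirt client tools"
fi
```

Because the device is attached as managed='yes', libvirt handles detaching the VF from the host driver and rebinding it on guest shutdown.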
What to watch for when enabling SR-IOV
When enabled, the SR-IOV specification has some side effects. Virtual network functions are attached to a guest before it boots. As a result, features such as pausing guests, hot-plugging network interfaces and vMotion are unavailable. This adds credence to the belief that SR-IOV is not particularly suited for dynamic web or database workloads. However, for edge firewall or other content-security requirements -- which tend to remain relatively static -- the loss of these features is probably acceptable.
The SR-IOV specification gives the network administrator an additional tool for workload optimization. Clearly, it is not a panacea, but it could significantly extend the life span of rented or owned compute resources. If you recently told your accounting department that, thanks to Spectre and Meltdown, standing still is risky and could be costly, using SR-IOV could win you back some brownie points. Time to start investigating your server specs.