vSphere DirectPath I/O (DPIO) is a vSphere feature that takes advantage of VT-enabled processors installed in ESXi hosts in order to improve performance for virtual machines. A processor feature in some Intel and AMD CPUs, the I/O memory management unit (IOMMU), remaps direct memory access (DMA) transfers and device interrupts. This allows virtual machines to bypass the vmkernel and gain direct access to physical hardware. Whilst available in vSphere 4, DPIO has been enhanced for vSphere 5. For example, vMotion is now supported for virtual machines with DPIO enabled on Cisco UCS hardware. Note that in order to take advantage of this, the VM must be configured with DPIO on Cisco UCS through a Cisco Virtual Machine Fabric Extender (VM-FEX) distributed modular system.
Other than VMs configured with DirectPath on Cisco UCS through a VM-FEX distributed modular system, enabling DPIO for a virtual machine makes the following features unavailable to that virtual machine:
- vMotion
- Hot adding and removing virtual devices
- Record and replay
- Fault Tolerance
- HA
- Snapshots
By interfacing directly with the hardware, DPIO can improve VM performance by reducing the CPU cycles that would otherwise be spent on vmkernel translation. DirectPath I/O is an advanced feature and is only recommended for environments with very high network workloads; anything that is extremely time or latency sensitive may benefit from DPIO. See the references at the end of this post for more on the performance benefits.
Configure DirectPath I/O on an ESXi Host
There are a number of prerequisites for getting DPIO to work on your hosts. First of all you need to ensure that your ESXi host’s processor supports passthrough and that virtualisation support (Intel VT-d or AMD-Vi) is enabled. You can check this in the vSphere Client by browsing to your host, then the Configuration tab and Advanced Settings. If you see the message “Host does not support passthrough configuration”, then you should check that the server’s BIOS is configured correctly.
Once you have made the necessary changes the ‘not supported’ message should be replaced by a message stating that there are “No devices currently enabled for passthrough.”:
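If you prefer to check this from a script, the passthrough state of a host’s PCI devices is also exposed through the vSphere API. Below is a minimal sketch using Python and the pyVmomi library; the vCenter address, credentials and host name are placeholders for your own environment, and the exact properties available may vary with your vSphere version:

```python
# List PCI devices and their passthrough state on an ESXi host (pyVmomi sketch).
# The connection details and host name below are placeholders - adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")

    # pciPassthruInfo reports, per PCI device, whether it can be and is enabled for passthrough
    for info in host.config.pciPassthruInfo:
        print("{}  capable={}  enabled={}  active={}".format(
            info.id, info.passthruCapable, info.passthruEnabled, info.passthruActive))
finally:
    Disconnect(si)
```

A host with no capable devices (or with VT-d/AMD-Vi disabled in the BIOS) will simply show every device as not capable, which corresponds to the messages seen in the client.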
So, now you can click Configure Passthrough, then select the devices to be configured for passthrough:
Select the device that you’re configuring passthrough on and click OK. Note: if you pick a device that ESXi is already using, you’ll be presented with a warning message, which is useful if you accidentally select the wrong device:
The devices should now appear as available for direct access. Note: If the device icons contain an orange arrow, you will need to reboot the host before the device can be assigned to a virtual machine:
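The Configure Passthrough step can also be scripted against the host’s PciPassthruSystem. This is a hedged sketch that reuses the pyVmomi session and `host` object from the previous example; the PCI address shown is purely an example and should be replaced with the id reported for your device:

```python
# Mark a PCI device for passthrough on an ESXi host (pyVmomi sketch).
# Assumes 'host' is the vim.HostSystem object retrieved in the previous example.
from pyVmomi import vim

device_id = "0000:0b:00.0"  # example PCI address - use the id shown for your device

passthru_config = vim.host.PciPassthruConfig()
passthru_config.id = device_id
passthru_config.passthruEnabled = True

# UpdatePassthruConfig takes a list of per-device configs
host.configManager.pciPassthruSystem.UpdatePassthruConfig([passthru_config])

# As in the vSphere Client, a host reboot may be needed before the device becomes assignable
print("Passthrough requested for {}; reboot the host if the device shows as pending.".format(device_id))
```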
Configure a PCI Device on a Virtual Machine
The last step is to configure a virtual machine to use the device. The following process describes how to do this in the vSphere Client; a scripted equivalent is sketched after the notes below:
- From the vSphere Client, select a VM.
- Click Edit Settings.
- With the Hardware tab selected, click Add.
- Select PCI device, click Next.
- Select the Passthrough device configured previously, click Next.
- Click Finish.
There are a few things to be aware of here, along with the limitations mentioned earlier:
1. A virtual machine must be powered off prior to adding PCI devices.
2. Up to 6 PCI devices can be added to a VM running on ESXi 5.
3. Adding a DirectPath device to a VM sets memory reservation to the memory size of the virtual machine.
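For completeness, here is a hedged pyVmomi sketch of the same VM configuration step, again reusing the `si` and `host` objects from the earlier examples. The VM name and PCI address are placeholders, and the property names reflect my reading of the vSphere API (EnvironmentBrowser.QueryConfigTarget and VirtualPCIPassthrough), so treat it as a starting point rather than a definitive implementation:

```python
# Add a passthrough PCI device to a powered-off VM (pyVmomi sketch).
# Assumes 'si' and 'host' from the earlier examples; the VM name and PCI address are placeholders.
from pyVmomi import vim

vm_name = "testvm01"          # example VM name
device_id = "0000:0b:00.0"    # example PCI address of the enabled passthrough device

content = si.RetrieveContent()
vm_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == vm_name)

# The VM must be powered off before a PCI device can be added (limitation 1 above)
assert vm.runtime.powerState == "poweredOff", "Power off the VM first"

# Ask the environment browser which passthrough devices the host can offer this VM
config_target = vm.environmentBrowser.QueryConfigTarget(host=host)
pci_info = next(p for p in config_target.pciPassthrough if p.pciDevice.id == device_id)

backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
    id=pci_info.pciDevice.id,
    deviceId=format(pci_info.pciDevice.deviceId & 0xFFFF, "04x"),
    systemId=pci_info.systemId,
    vendorId=pci_info.pciDevice.vendorId,
    deviceName=pci_info.pciDevice.deviceName)

dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
# Mirrors limitation 3 above: a DirectPath device requires the VM's memory to be fully reserved
spec.memoryReservationLockedToMax = True

vm.ReconfigVM_Task(spec=spec)
```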
References and Useful Links
Thanks to the following articles, where much of the info here was gathered:
- vSphere 5.0 Networking Guide
- Configuration Examples and Troubleshooting for VMDirectPath ESX 4.0
- What’s New in Performance in VMware vSphere 5.0
- VMware KB 1010789
- Performance and Use Cases of VMware DirectPath I/O for Networking
- Performance Best Practices for vSphere 5