Determine requirements for and configure NPIV

by admin

What is NPIV?

According to Wikipedia:

“N_Port ID Virtualization or NPIV is a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in Storage Area Network design, especially where virtual SANs are called for. NPIV is defined by the Technical Committee T11 in the Fibre Channel – Link Services (FC-LS) specification.”

Basically, it is a standard that allows a single HBA to register on the fabric with multiple WWPNs (World Wide Port Names), by making use of what can be thought of as virtual WWNs. NPIV lets you go beyond the usual limit of one WWPN per Host Bus Adaptor. This allows virtual machines to have their own individual WWPNs, which in turn allows them to take advantage of features such as QoS, and offers the ability to zone storage directly to individual virtual machines. It’s worth bearing in mind, however, that with the current vSphere implementation of NPIV the host’s own WWPN also has to be zoned to the storage.

NPIV Limitations and Implementation Considerations

  • NPIV may provide security benefits, as it allows you to assign storage directly to a virtual machine, much as you would assign storage to a physical server.
  • NPIV may permit certain storage network features, such as QoS, to be made available directly to virtual machines.
  • NPIV works only with RDM disks.
  • To use NPIV, all involved hardware must be compatible, including the HBAs and the storage switches. Check your hardware against the VMware HCL, as only certain HBA cards support NPIV.
  • You cannot use Storage vMotion with an NPIV-enabled RDM, but regular vMotion is supported.
  • An RDM must be preconfigured, zoned, and masked directly to the physical HBA using soft/hard zoning before configuring virtual machine VPORTs/WWPNs.
  • Up to 16 WWNs can be assigned to virtual machines residing on an ESXi 5 host.
  • A minimum of 2 WWNs is required for resilience.
  • RDMs should reside on the same datastore as the virtual machine’s .vmx configuration file.

Verifying NPIV Hardware Compatibility

Before you can configure NPIV, you need to ensure that your HBA is compatible. If necessary, you can identify which HBA card is installed from the ESXi shell by listing the contents of the /proc/scsi directory:
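For instance, a session along these lines should reveal the driver in use (the qla2xxx path below is an example; QLogic adapters typically appear as qla2xxx and Emulex as lpfc):

```shell
# List the HBA drivers the host has registered under /proc/scsi.
# Directory names depend on the driver -- e.g. qla2xxx (QLogic) or lpfc (Emulex).
ls /proc/scsi/ 2>/dev/null || echo "no /proc/scsi on this system"

# Each driver directory holds a numbered file per adapter with model and
# firmware details; the qla2xxx path is only an example.
cat /proc/scsi/qla2xxx/* 2>/dev/null || echo "adjust the path to match your driver"
```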

You can then check your HBA against the VMware HCL to ensure it is compatible. You will also need to ensure that your switches are supported and configured appropriately.

NPIV Configuration on ESXi 5 Hosts

First of all, you need to make sure that the LUN you intend to make available directly to a virtual machine using NPIV is already available to the host(s) where the virtual machine resides.

To configure the virtual machine (already configured with the RDM), go to the Options tab in the virtual machine settings and select Fibre Channel NPIV. Click Generate new WWNs, and then click Finish:

If you take a look at the .vmx file at this point you will notice that there are some new entries, with values for wwn.node, wwn.port and wwn.type.
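The new entries look something like the following (the WWN values here are made up for illustration; vCenter generates the real ones, and the exact wwn.type value may differ in your environment):

```
wwn.node = "28fa000c29000001"
wwn.port = "28fa000c29000002"
wwn.type = "vc"
```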

To troubleshoot any issues, review /proc/scsi/… and /var/log/vmkernel for messages relating to NPIV.
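As a starting point, something like the following can pull recent NPIV-related lines from the log (the npiv/vport search patterns are assumptions; exact message wording varies by HBA driver):

```shell
# Show the last 20 vmkernel log lines mentioning NPIV or virtual port (VPORT)
# activity. The patterns are examples -- adjust them to match your driver's messages.
grep -iE "npiv|vport" /var/log/vmkernel 2>/dev/null | tail -n 20
```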

Glossary of NPIV Related Terms

FC – Fibre Channel.
RDM – Raw Device Mapping: a VMware feature that allows a virtual machine to access a raw LUN.
FLOGI – Fabric Login: performed by an N_Port device to register its address on the storage fabric.
FDISC – Fabric Discovery: the part of the FLOGI process in which NPIV devices register their virtual WWPNs.
F_Port – FC Fabric Port: the port on a fabric switch connected to a host.
N_Port – FC Node Port: the name for a physical HBA port.
WWPN – World Wide Port Name: each physical HBA port has one of these.
WWNN – World Wide Node Name: there is one of these per host bus adaptor.
WWN – World Wide Name: the WWNN + WWPN of a device.

