Wednesday, July 10, 2013

N-Port ID Virtualization (NPIV)

N-Port ID Virtualization (NPIV) enables a single Fibre Channel HBA port to register several worldwide port names (WWPNs) with the fabric. Each WWPN appears as a unique entity on the Fibre Channel fabric and can be assigned to an individual VM. When VMs do not have WWN assignments, they access storage LUNs with the WWNs of their host’s physical HBAs. By using NPIV, a SAN administrator can monitor and route storage access on a per-VM basis.
When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is updated to include a WWN pair (a World Wide Port Name, WWPN, and a World Wide Node Name, WWNN). When that VM is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA, which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA; that is, it has its own unique identifier, the WWN pair that was assigned to the VM. Each VPORT is specific to its VM; when the VM is powered off, the VPORT is destroyed on the host and no longer appears to the FC fabric. When a VM is migrated from one host to another, the VPORT is closed on the first host and opened on the destination host.
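As a concrete illustration (not from the original white paper), a WWNN/WWPN is a 64-bit identifier that most tools display as eight colon-separated hex octets. The short Python sketch below, using made-up values, shows how such a pair can be converted between the integer form used by management APIs and the colon-separated string you see in zoning tools and in the .vmx file:

# Sketch: representing an NPIV WWN pair (WWNN + WWPN).
# All values below are hypothetical and for illustration only.

def wwn_to_str(wwn: int) -> str:
    """Format a 64-bit WWN as eight colon-separated hex octets."""
    return ":".join(f"{(wwn >> shift) & 0xFF:02x}" for shift in range(56, -8, -8))

def str_to_wwn(text: str) -> int:
    """Parse a colon-separated WWN string back into a 64-bit integer."""
    return int(text.replace(":", ""), 16)

# A hypothetical WWN pair assigned to one VM (node name + port name).
wwnn = 0x2823000C29000001
wwpn = 0x2823000C29000002

print("WWNN:", wwn_to_str(wwnn))   # 28:23:00:0c:29:00:00:01
print("WWPN:", wwn_to_str(wwpn))   # 28:23:00:0c:29:00:00:02
assert str_to_wwn(wwn_to_str(wwpn)) == wwpn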
[Figure: NPIV in the SAN]
NPIV Advantages:
ESXi leverages NPIV to assign individual WWNs to each VM, so that each VM can be recognized as a specific endpoint in the fabric. The benefits of this approach are as follows:
  • Granular security: Access to specific storage LUNs can be restricted to specific VMs by using the VM WWN for zoning, in the same way that access can be restricted to specific physical servers (a small illustration follows this list).
  • Easier monitoring and troubleshooting: The same monitoring and troubleshooting tools used with physical servers can now be used with VMs, since the WWN and the fabric address that these tools rely on to track frames are now uniquely associated with a VM.
  • Flexible provisioning and upgrade: Since zoning and other services are no longer tied to the physical WWN “hard-wired” to the HBA, it is easier to replace an HBA. You do not have to reconfigure the SAN storage, because the new server can be pre-provisioned independently of the physical HBA WWN.
  • Workload mobility: The virtual WWN associated with each VM follows the VM when it is migrated across physical servers. No SAN reconfiguration is necessary when the workload is relocated to a new server.
  • Applications identified in the SAN: Since virtualized applications tend to be run on a dedicated VM, the WWN of the VM now identifies the application to the SAN.
  • Quality of Service (QoS): Since each VM can be uniquely identified, QoS settings can be extended from the SAN to VMs.
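To make the zoning benefit in the first bullet more tangible, here is a toy Python sketch (all VM names and WWPNs are invented) that pairs per-VM NPIV WWPNs with array target WWPNs into single-initiator/single-target zone definitions; the actual zone-creation syntax depends on your switch vendor:

# Sketch: deriving single-initiator/single-target zones from per-VM WWPNs.
# All names and WWPNs below are hypothetical.

vm_wwpns = {
    "vm-web01": "28:23:00:0c:29:00:00:02",
    "vm-db01":  "28:23:00:0c:29:00:00:04",
}

array_targets = ["50:06:01:60:3b:a0:11:11", "50:06:01:68:3b:a0:11:11"]

zones = {}
for vm, initiator in vm_wwpns.items():
    for i, target in enumerate(array_targets):
        # One zone per initiator/target pair restricts the LUN to exactly
        # this VM, mirroring the way physical servers are zoned.
        zones[f"z_{vm}_t{i}"] = [initiator, target]

for name, members in zones.items():
    print(name, "->", "; ".join(members))

Because the WWPN belongs to the VM rather than to the host HBA, these zones stay valid when the VM moves to another host.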
Requirements for Using NPIV
  • NPIV can be used only for VMs with RDM disks. VMs with regular virtual disks use the WWNs of the host’s physical HBAs.
  • HBAs on your host must support NPIV.
  • Use HBAs of the same type, either all Brocade or all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
  • If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the VM. This is required to support multipathing even though only one path at a time will be active.
  • Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled VMs running on that host.
  • The switches in the fabric must be NPIV-aware.
  • When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN number and target ID.
  • Use the vSphere Client to manipulate virtual machines with WWNs.
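The vSphere Client is the supported way to change WWN assignments, but the assignments can also be inspected read-only through the vSphere API. The following pyVmomi sketch, written under the assumption that the pyvmomi package is installed and with placeholder vCenter host name and credentials, lists the NPIV WWN pairs recorded in each VM’s configuration:

# Sketch: listing NPIV WWNs recorded in VM configurations via pyVmomi.
# The vCenter host name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wwn_to_str(wwn):
    # Format a 64-bit WWN as eight colon-separated hex octets.
    return ":".join(f"{(wwn >> s) & 0xFF:02x}" for s in range(56, -8, -8))

ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None or not vm.config.npivPortWorldWideName:
            continue
        print(vm.name)
        nodes = vm.config.npivNodeWorldWideName or []
        ports = vm.config.npivPortWorldWideName
        for wwnn, wwpn in zip(nodes, ports):
            print("  WWNN", wwn_to_str(wwnn), " WWPN", wwn_to_str(wwpn))
    view.DestroyView()
finally:
    Disconnect(si)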
NPIV Capabilities and Limitations
ESXi with NPIV supports the following items:
  • NPIV supports vMotion. When you use vMotion to migrate a VM, it retains the assigned WWN.
    If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, VMkernel reverts to using a physical HBA to route the I/O.
  • If your FC SAN environment supports concurrent I/O on the disks from an active-active array, concurrent I/O to two different NPIV ports is also supported.
When you use ESXi with NPIV, the following limitations apply:
  • Because the NPIV technology is an extension to the FC protocol, it requires an FC switch and does not work with direct-attached FC disks.
  • When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the WWN.
  • NPIV does not support Storage vMotion.
  • Disabling and then re-enabling the NPIV capability on an FC switch while VMs are running can cause an FC link to fail and I/O to stop.
Thanks to VMware; the information in this post comes from the white paper provided by VMware.
    
