Wednesday, July 10, 2013

Storage vMotion

Storage vMotion leverages the same technology that is used for vMotion but applies it to the migration of virtual disk files. Storage vMotion allows VMware to implement a patented load-balancing technique for virtual machines based on storage usage and load, and it can also be performed on individual virtual machines. Storage vMotion is storage-type independent and works across NFS datastores as well as across VMFS datastores on Fibre Channel, iSCSI, and local SCSI storage.
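As a quick illustration of kicking off a Storage vMotion programmatically, here is a minimal sketch using the pyVmomi Python SDK. This is not from the white paper; the vCenter address, credentials, VM name, and datastore name are placeholders, and real code should validate certificates and handle errors.

```python
# Minimal pyVmomi sketch: trigger a Storage vMotion by relocating a VM's
# disks to another datastore. All names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Walk the inventory for the first managed object with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_obj(vim.VirtualMachine, "app-vm-01")      # placeholder VM
ds = find_obj(vim.Datastore, "datastore-dst")       # placeholder datastore

# A RelocateSpec with only a datastore set requests a storage-only move.
spec = vim.vm.RelocateSpec(datastore=ds)
task = vm.RelocateVM_Task(spec=spec)
print("Storage vMotion started:", task.info.key)

Disconnect(si)
```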
The Process:
The Storage vMotion process is fairly straightforward and not as complex as one might expect.
  1. The virtual machine working directory is copied by VPXA to the destination datastore.
  2. A “shadow” virtual machine is started on the destination datastore using the copied files. The “shadow” virtual machine idles, waiting for the copying of the virtual machine disk file(s) to complete. (A new vpx process is started on the same host.)
  3. Storage vMotion enables the Storage vMotion Mirror driver to mirror writes of already copied blocks to the destination.
  4. In a single pass, a copy of the virtual machine disk file(s) is completed to the target datastore while mirroring I/O (a sketch of this copy-and-mirror flow follows after this list).
  5. Storage vMotion invokes a Fast Suspend and Resume of the virtual machine (similar to vMotion) to transfer the running virtual machine over to the idling shadow virtual machine.
  6. After the Fast Suspend and Resume completes, the old home directory and VM disk files are deleted from the source datastore.
Note: a shadow VM is only created when the VM home directory is moved. For a “disks-only” Storage vMotion, the VM is simply fast suspended and resumed.
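To make steps 3 and 4 concrete, here is a toy Python model (not VMware's actual code) of a single-pass copy with the mirror driver active: writes to blocks that have already been copied are mirrored to the destination, while writes to not-yet-copied blocks land on the source and are picked up by the ongoing bulk copy.

```python
# Toy model of the single-pass copy in steps 3-4.
src = {i: f"data{i}" for i in range(8)}   # source disk, 8 "blocks"
dst = {}                                  # destination datastore
copied = set()                            # blocks the bulk pass has finished

def guest_write(block, value):
    """Guest I/O arriving mid-migration."""
    src[block] = value
    if block in copied:                   # mirror driver: already copied,
        dst[block] = value                # so the write goes to both sides

# The single bulk-copy pass over the disk.
for block in range(8):
    dst[block] = src[block]
    copied.add(block)
    if block == 3:                        # simulate guest writes mid-copy
        guest_write(1, "new1")            # block 1 already copied: mirrored
        guest_write(6, "new6")            # block 6 not yet copied: the bulk
                                          # pass will copy the fresh value

assert dst == src                         # one pass, no re-iteration needed
print("destination consistent after a single copy pass")
```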
 
Mirror Driver
The mirror driver mirrors the I/O. When a VM that is being Storage vMotioned writes to disk, the write is committed to both the source and the destination disk. The write is only acknowledged to the VM when both the source and the destination have acknowledged it. Because of this, re-iterative copy passes are unnecessary.
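A small illustrative sketch of that acknowledgement rule (again, a toy model rather than VMware code): a mirrored write is acknowledged to the VM only after both the source write and the destination write complete.

```python
# Toy illustration of the mirror driver's ack rule.
from concurrent.futures import ThreadPoolExecutor
import random
import time

source, destination = {}, {}

def write_to(name, store, block, value):
    time.sleep(random.uniform(0.01, 0.05))  # simulated device latency
    store[block] = value
    return f"{name} acked"

def mirrored_write(block, value):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(write_to, "source", source, block, value),
                   pool.submit(write_to, "destination", destination,
                               block, value)]
        acks = [f.result() for f in futures]  # block until BOTH sides ack
    print(f"block {block}: acknowledged to the VM only after {acks}")

mirrored_write(0, "payload")
assert source[0] == destination[0] == "payload"
```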
Datamover
The hypervisor uses a component called the datamover for copying data and for provisioning new virtual machines. The datamover was first introduced with ESX 3.0 and is utilized by Storage vMotion, as blocks need to be copied between datastores.
  • fsdm – The legacy ESX 3.0 datamover; the most basic and slowest version, as the data moves all the way up the stack and down again.
  • fs3dm – Introduced with vSphere 4.0 with substantial optimizations so that data does not travel through the entire stack.
  • fs3dm – hardware offload – The VAAI Full Copy hardware offload, introduced with vSphere 4.1. Maximum performance and minimal host CPU/memory overhead.
In ESXi, if a VMFS volume with a different block size, or a volume on a different array, is selected as the destination, ESXi reverts to the legacy datamover (fsdm). If the same block sizes are used, the newer datamover (fs3dm) is utilized. Depending on the capabilities of the array, the task is performed in the software stack or offloaded to the array through VAAI.
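That selection logic can be summarized in a short sketch; the decision structure below is condensed from this paragraph, not taken from ESXi source code.

```python
# Assumed decision structure for datamover selection, per the text above.
def pick_datamover(same_block_size, same_array, vaai_capable):
    # A different block size or a different array reverts to the legacy path.
    if not same_block_size or not same_array:
        return "fsdm (legacy)"
    # Same block size: the newer datamover, offloaded when the array
    # supports VAAI Full Copy, otherwise performed in the software stack.
    if vaai_capable:
        return "fs3dm (VAAI hardware offload)"
    return "fs3dm (software)"

print(pick_datamover(True,  True,  True))    # fs3dm (VAAI hardware offload)
print(pick_datamover(True,  True,  False))   # fs3dm (software)
print(pick_datamover(False, True,  True))    # fsdm (legacy)
```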
Thanks to VMware; the information in this post is from the white paper provided by VMware.
    
