The intention of this post is to take a look at storage policies, Storage vMotion and Storage DRS from a design perspective, to address the exam objective focusing on these technologies. The idea here is not to explain the technologies in detail (there's plenty of information elsewhere); instead I will highlight some of the factors relating to these technologies that should be kept in mind when designing a solution that makes use of them.
Storage Policies
Storage policies are very useful when it comes to VM provisioning, as they can be used to place a virtual machine on appropriate storage depending on its requirements.
Storage policies in vSphere 6 use a different format to vSphere 5.x (where they were known as storage profiles). I’ve written about how to configure storage profiles previously. When upgrading to vSphere 6, any existing storage profiles are automatically converted to storage policies, though some of the terminology changes: instead of Virtual Machine Storage Profiles, System Defined Capabilities and User Defined Capabilities, we have Virtual Machine Storage Policies, Storage Specific Data Services, Datastore Tags and Common Data Services. The concept remains the same: you apply a storage policy to a virtual machine, and place VMs on datastores that match the requirements set in the policy. If all is good, the VM will be compliant with its attached storage.
A common way to make use of storage policies is to ‘tag’ datastores with known characteristics, for example ‘Replicated’, ‘Non-Replicated’ or ‘Gold/Fast’. If using tags in this way it’s important to document exactly what ‘Gold’ means as part of the design. Alternatively, if you have VASA (vSphere APIs for Storage Awareness) configured, certain storage characteristics will be exposed to vCenter, which can be used in storage policies instead of manual tagging. A combination of both VASA and tagging can also be used.
Storage policies are a great feature, and very useful when it comes to helping meet requirements for virtual machines. For example, a requirement for a given solution might be that a particular group of virtual machines has to be on Tier 1 storage, or on replicated storage (or both). By applying a storage policy to a VM we can build this requirement into vCenter, and determine whether it has been met by running a compliance check. Have a look at the chapter on storage policies in the vSphere Storage guide for detailed information on storage policy configuration.
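To illustrate the matching idea conceptually (this is not a vSphere API call, just a sketch), the logic boils down to finding datastores whose tags satisfy everything the policy requires. The tag names and the find_compliant_datastores helper below are hypothetical, purely for illustration:

```python
# Hypothetical illustration of policy/tag matching - not a vSphere API call.
# Datastore names and tags are invented for the example.
datastore_tags = {
    "DS-PROD-01": {"Gold", "Replicated"},
    "DS-PROD-02": {"Silver", "Replicated"},
    "DS-TEST-01": {"Bronze", "Non-Replicated"},
}

def find_compliant_datastores(required_tags, tags_by_datastore):
    """Return datastores whose tags satisfy every tag required by the policy."""
    return [name for name, tags in tags_by_datastore.items()
            if required_tags.issubset(tags)]

# A policy requiring Tier 1 ("Gold"), replicated storage:
print(find_compliant_datastores({"Gold", "Replicated"}, datastore_tags))
# ['DS-PROD-01']
```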
Storage vMotion
svMotion is a well known feature, used to migrate a virtual machine’s disks and configuration files while the VM is running. From a design perspective you should know the requirements and limitations:
- Firstly, the host on which the virtual machine is running must have a license that includes Storage vMotion. The good news is that svMotion is available in all vSphere editions apart from the Essentials kits.
- The host on which the VM to be migrated is running must be able to access both the source and target datastores.
- It’s worth knowing the configuration maximums relating to svMotion; there are limits on how many svMotion operations can be carried out concurrently (per host and per datastore).
There are also a number of performance recommendations detailed in the Performance Best Practices guide. Note that Storage vMotion will often perform much better if the storage arrays are VAAI-capable. I’ve written about administering VAAI previously. In terms of design it’s important to be aware of whether the storage array in use supports VAAI. The VAAI Extended Copy (XCOPY) primitive allows copy operations to take place entirely on the storage array rather than copying data to and from an ESXi host, which can significantly improve svMotion performance.
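If you need to script an svMotion (for example when migrating a batch of VMs as part of a design change), a rough pyVmomi sketch looks something like the following. Treat it as an outline rather than a definitive implementation; the vCenter address, credentials and object names are placeholders.

```python
# Rough pyVmomi sketch of a Storage vMotion: relocate a VM's disks/config
# to another datastore. Hostname, credentials and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ssl_ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl_ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory and return the first managed object with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "app-vm-01")
target_ds = find_by_name(vim.Datastore, "DS-PROD-01")

# The host running the VM must be able to see the target datastore (see the
# requirements above) - a quick sanity check before migrating:
assert target_ds in vm.runtime.host.datastore, "Host cannot access target datastore"

spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec)  # the svMotion runs while the VM stays powered on
# ...wait for the task to complete, then:
Disconnect(si)
```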
Storage DRS
Storage DRS can help manage storage by providing I/O load balancing and free space balancing across datastores that are configured as part of a datastore cluster. There are some things to be aware of when including SDRS in a design:
- It’s recommended to ensure that all datastores in a given datastore cluster use the same host interface protocol (e.g. NFS or iSCSI), use the same RAID level and have the same performance characteristics.
- When planning to use SDRS, be aware of the configuration maximums around how many datastores and virtual disks can be in a datastore cluster. A datastore cluster is limited to 9000 virtual disks and 64 datastores (and requires a minimum of 2 datastores).
- vCenter can have a maximum of 256 datastore clusters.
- It’s recommended to ensure that all hosts that can access a datastore in a datastore cluster can access every datastore in that cluster.
- By default, a VM’s disks will be kept together due to a default affinity rule. This can be changed by deselecting the ‘Keep VMDKs Together’ option, either at the datastore cluster level or per VM. There are potential performance benefits in allowing SDRS to place a given VM’s VMDKs on different datastores (see the sketch after this list).
- Inter-VM anti-affinity rules are also available, allowing you to prevent disks from certain VMs being placed on the same datastore.
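As a rough illustration of the ‘Keep VMDKs Together’ point, the default intra-VM affinity behaviour can also be changed programmatically. The sketch below assumes a pyVmomi connection (si/content) and the find_by_name helper from the svMotion example above, and the datastore cluster name is purely a placeholder; check the API reference before relying on the exact property names.

```python
# Rough pyVmomi sketch: allow SDRS to place a VM's VMDKs on different datastores
# by disabling the default "keep VMDKs together" affinity at the cluster level.
# Assumes an existing connection (si/content) and find_by_name() as defined earlier.
from pyVmomi import vim

pod = find_by_name(vim.StoragePod, "DS-Cluster-01")  # the datastore cluster (placeholder name)

pod_spec = vim.storageDrs.PodConfigSpec()
pod_spec.defaultIntraVmAffinity = False  # don't force each VM's disks onto a single datastore

sdrs_spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)
task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=sdrs_spec, modify=True)
# ...wait for the task to complete before assuming the setting has changed
```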
Useful Resources
https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf