
Tap into vSphere PVSCSI Performance with Separate VM Boot and Data Drives

One of the most interesting new vSphere storage features, in my opinion, is the new paravirtualized SCSI (PVSCSI) virtual disk controller. It has been reported that switching to PVSCSI improves I/O while reducing ESX 4 host CPU usage by as much as 18%. The benefits of PVSCSI performance are twofold:

  • Reduced data center power and cooling costs when you consider the impact of tens of hosts not having to work as hard
  • A potentially higher VM-to-host consolidation ratio when more CPU cycles are available

For reference, EMC virtualization guru Chad Sakac provided a post that explains the PVSCSI performance benefits:

http://virtualgeek.typepad.com/virtual_geek/2009/05/update-on-the-io-vsphere-performance-test.html

However, to take advantage of PVSCSI, a VM's virtual disk configuration might need to change. Because VMware does not support PVSCSI on the operating system boot partition, VMs need to be configured with separate virtual disks (.vmdk) for the boot drive and the data drive(s). Note that the posts and articles referenced here mention that PVSCSI does work on a .vmdk containing the boot partition; it's just that VMware does not officially support it.
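Before planning any migration it helps to know where a VM stands today, and the disk-to-controller layout can be read straight from the vSphere API. Below is a minimal Python sketch using pyVmomi (the open-source vSphere SDK); the vCenter hostname, the credentials, and the VM name app-server-01 are placeholder assumptions, not values from this article.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Lab-only SSL handling; validate certificates in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")  # hypothetical name

    # Map controller keys to controller devices, then report where each disk lives.
    controllers = {d.key: d for d in vm.config.hardware.device
                   if isinstance(d, vim.vm.device.VirtualSCSIController)}
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            ctrl = controllers.get(dev.controllerKey)
            if ctrl is None:
                continue  # disk on a non-SCSI controller
            is_pv = isinstance(ctrl, vim.vm.device.ParaVirtualSCSIController)
            print(dev.backing.fileName, "->",
                  "PVSCSI" if is_pv else type(ctrl).__name__)

A boot disk will typically report an LSI Logic controller type here, while data disks that have been moved show up under the paravirtual controller.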

So, the challenge in using PVSCSI is migrating services and applications on VMs that keep both the boot partition and the data on a single .vmdk. Although separate boot and data partitions are the de facto standard for physical servers, the convenience of VMs has led to a single .vmdk configuration in a lot of IT shops.

The incentive to use PVSCSI therefore overlaps with a shift in VM deployment strategy, and it provides a performance reason to adopt smaller, dedicated .vmdks for boot partitions. This multi-.vmdk design change has other benefits as well, including optimization of deduplication and DR site replication technologies.

Here are some quick thoughts on deploying and migrating VMs to a multiple .vmdk configuration.

  • Build a golden image VM template with multiple .vmdks, or change future VM deployment policy to include adding new .vmdks for installing applications and storing data (see the sketch after this list).
  • For VMs that already have separate partitions on a single .vmdk, use VMware Converter or another tool to perform a V2V migration to a new VM with separate .vmdks for each partition.
  • When possible, make sure P2V migrations of physical servers result in a separate .vmdk for each partition.
  • Unfortunately, for existing implementations combined on a single-partition .vmdk, building new VMs and reinstalling the applications may be the only choice.
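For the template and deployment-policy bullets above, the reconfiguration itself is scriptable. The following sketch adds a PVSCSI controller and a fresh thin-provisioned data .vmdk to an existing VM; again, the connection details, VM name, device key, and 40 GB size are illustrative assumptions rather than anything from this article.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")

    # New PVSCSI controller on SCSI bus 1; the negative key is a temporary
    # placeholder that vCenter replaces once the reconfigure task completes.
    ctrl = vim.vm.device.ParaVirtualSCSIController(
        key=-101, busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing)
    ctrl_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=ctrl)

    # Thin-provisioned 40 GB data disk attached to the new controller.
    disk = vim.vm.device.VirtualDisk(
        controllerKey=-101, unitNumber=0, capacityInKB=40 * 1024 * 1024,
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True))
    disk_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)

    WaitForTask(vm.ReconfigVM_Task(
        spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec])))

The new disk then only needs to be brought online and formatted inside the guest.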

Once the .vmdk configuration is ready, PVSCSI can be enabled following the processes explained in these posts:
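As a rough companion to those walkthroughs, here is a hedged pyVmomi sketch of the step where an existing data disk is reattached to the new PVSCSI controller. The VM should be powered off first, and the "Hard disk 2" label is an assumption about which disk carries the data partition.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")

    devices = vm.config.hardware.device
    pvscsi = next(d for d in devices
                  if isinstance(d, vim.vm.device.ParaVirtualSCSIController))
    disk = next(d for d in devices
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == "Hard disk 2")  # assumed data disk

    # Point the data disk at the PVSCSI controller and reconfigure in place.
    disk.controllerKey = pvscsi.key
    disk.unitNumber = 0  # first free unit on the new controller (assumption)
    edit = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[edit])))

On the guest side, the PVSCSI driver ships with VMware Tools, so make sure Tools is current before the data disk moves over.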

The final question may be: “Is it worth the effort to migrate all VMs to a PVSCSI-supporting configuration?” The performance, consolidation, and cost savings factors would lead most virtual administrators to answer “yes,” but ultimately the decision will most likely be made on a VM-by-VM basis. There are other factors to consider as well. For example, vSphere Fault Tolerance cannot be enabled on a VM using PVSCSI.

VMware’s PDF on the new vSphere storage features can be found at http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R1.pdf.

About the author

Rich Brambley
