
vSphere 4.0 – What’s new in vSphere Storage

This weekend I finally had the chance to catch up on some of the new storage features released as part of vSphere 4.0. There are quite a few changes to cover, some of them quite exciting.

VMFS Upgrade

One of the good pieces of news to come out is that the VMFS changes in vSphere are minimal. vSphere 4.0 introduces a minor point release (3.3.0 to 3.3.1) with some subtle changes, so much so that it's not really been documented anywhere. Most of the changes to VMFS are actually delivered within the VMFS driver at the VMkernel level; this is where most of the I/O improvements and features such as thin provisioning have been delivered as part of vSphere.

Upgrading VMFS was a major step in the move from VMFS 2 to VMFS 3, so it's good to hear that there are no major drivers to upgrade VMFS as part of your vSphere upgrade. Any new VMFS datastores created on the new vSphere hosts will of course be VMFS 3.3.1, but this is backward compatible with earlier ESX 3.x versions. If you really want to move onto the new version of VMFS, format some new datastores and use Storage vMotion to move your VMs onto the new VMFS 3.3.1 datastores, as sketched below.
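If you'd rather script that than click through the vSphere Client, here's a minimal sketch of the move using VMware's pyVmomi Python bindings. The vCenter address, credentials, VM name and datastore name are all placeholders; RelocateVM_Task is the standard API call underneath Storage vMotion.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details: substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")

def find_by_name(si, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(si, vim.VirtualMachine, "my-vm")                    # placeholder
target_ds = find_by_name(si, vim.Datastore, "new-vmfs331-datastore")  # placeholder

# Storage vMotion: relocate the VM's disks onto the freshly formatted datastore.
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec)
```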

Thin Provisioning

Thin provisioning is one of the areas that excites me most about the new vSphere release. I conducted a very quick survey of my employer's development and system test ESX environments recently and found that we were only utilising 48% of the virtual storage that had been provisioned. It's easy to see where immediate savings can be made simply by implementing vSphere and thin provisioning. I'll be using that in the cost benefits case for sure!
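By way of illustration, the potential saving is simple arithmetic. The capacity figure below is a made-up placeholder; only the 48% utilisation comes from the survey.

```python
provisioned_gb = 10_000   # hypothetical total thick-provisioned capacity
utilisation = 0.48        # fraction actually written, per the survey
reclaimable_gb = provisioned_gb * (1 - utilisation)
print(f"~{reclaimable_gb:,.0f} GB reclaimable via thin provisioning")  # ~5,200 GB
```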

Thin provisioning is nothing new; it has been available at the array level for a while now, so one of the big questions is: where should I thin provision? Well, that really depends on what kind of environment you have, I suppose. Smaller customers will benefit greatly from VMware thin provisioning as they probably don't own arrays capable of thin provisioning. Bigger companies, on the other hand, might well benefit from doing both, as they have the skill sets and the equipment to fully utilise it at both levels.

Chad Sakac has written a superb article entitled "Thin on Thin? Where should you do Thin Provisioning – vSphere 4.0 or Array-Level?" which goes deep into the new thin provisioning features and the discussion around the best approach. I strongly suggest people give it a read; it explains pretty much all you need to know.

Storage vMotion

Storage vMotion in ESX 3.5 had a few limitations, which vSphere addresses. It's now fully integrated with vCenter, as opposed to being command-line based in the previous version, and it allows a VM to be moved between different storage types, i.e. FC, iSCSI or NFS. One excellent use of Storage vMotion is the ability to migrate your thick VMs and convert them to thin VMs (sketched below). Perfect for reclaiming disk space and increasing utilisation without downtime. Brilliant!
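As a rough illustration of the thick-to-thin trick, the sketch below extends the earlier pyVmomi snippet. As far as I can tell, the conversion is just the transform field on the same RelocateSpec; treat this as a hedged example rather than gospel, and note that vm and target_ds are the placeholder objects looked up earlier.

```python
from pyVmomi import vim

# Same relocation as before, but ask Storage vMotion to convert the disks
# to thin ("sparse") format on the way across.
spec = vim.vm.RelocateSpec(
    datastore=target_ds,
    transform=vim.vm.RelocateSpec.Transformation.sparse,
)
task = vm.RelocateVM_Task(spec)
```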

Storage vMotion has also been enhanced from an operational perspective. Previously, Storage vMotion involved taking a snapshot of a disk, copying the parent disk to its new location, then taking the child snapshot and re-parenting it to the copied parent disk. This process required twice the CPU and memory of the VM being migrated in order to ensure zero downtime. In vSphere 4.0, Storage vMotion uses changed block tracking and a process very similar to how vMotion moves active memory between hosts. The new Storage vMotion copies iteratively, scanning for which blocks have changed since the last pass; each iteration should result in smaller and smaller increments, and once the delta gets down to a small enough size it performs a very quick suspend/resume operation instead of doubling up resources as it previously needed to. This makes it faster and more efficient than its previous incarnation.
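If the iterative copy is hard to picture, here's a purely illustrative toy loop of the idea in Python. It is emphatically not VMware's implementation; the changed-block tracking is faked with a callback.

```python
def iterative_copy(read_block, write_block, total_blocks,
                   blocks_dirtied_since_last_pass, cutover_threshold=16):
    """Toy model of the iterative changed-block copy described above."""
    dirty = set(range(total_blocks))            # pass 1: copy everything
    while len(dirty) > cutover_threshold:
        for block in dirty:
            write_block(block, read_block(block))
        # Blocks the still-running VM wrote during that pass; each set
        # should be smaller than the last, so the loop converges.
        dirty = blocks_dirtied_since_last_pass()
    # The remaining delta is tiny: a quick suspend/resume copies it and
    # cuts over, instead of the old double-the-resources approach.
    for block in dirty:
        write_block(block, read_block(block))
```

The key property is convergence: as long as the VM dirties blocks more slowly than they can be copied, each pass shrinks until the final suspend/resume is near-instant.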

Paravirtualised SCSI

Paravirtualised SCSI (PVSCSI) is a new adapter driver for I/O-intensive virtual machines. VMware compare it to the vmxnet adapter, an enhanced and optimised network driver providing higher performance. PVSCSI is similar: it's a dedicated driver that offers higher I/O throughput, lower latency and lower CPU utilisation within virtual machines. Figures discussed by Paul Manning on the recent VMware Community podcast included a 92% increase in IOPS throughput and a 40% decrease in latency when compared to the standard LSI/BusLogic virtual driver.

A caveat of this technology is that the guest OS still has to boot from a non-PVSCSI adapter (LSI or BusLogic), so you would look to add a PVSCSI adapter for your additional data virtual disks, as in the sketch below. Currently only Windows 2003, Windows 2008 and Red Hat Enterprise Linux 5 have the software drivers to take advantage of this new adapter.
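For completeness, here's roughly what adding a second, paravirtual controller for data disks looks like via the same Python bindings. Again a hedged sketch: vm is the placeholder VirtualMachine object from earlier, and the bus number is arbitrary.

```python
from pyVmomi import vim

# Add a second SCSI controller of the paravirtual type; the boot disk stays
# on the default LSI/BusLogic controller, per the caveat above.
pvscsi = vim.vm.device.ParaVirtualSCSIController(
    key=-1,        # negative key asks vSphere to assign one
    busNumber=1,   # bus 0 keeps the existing boot controller
    sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing,
)
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=pvscsi,
)
task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```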

Update – Chad Sakac has posted a new EMC World I/O performance comparison of the vSphere PVSCSI adapter vs the LSI SCSI adapter; check out the link for more details.

VMware Storage Book

Paul Manning mentioned on the recent podcast that VMware are planning a book dedicated to virtualisation and storage, in an attempt to consolidate the amount of documentation out there on storage configuration and best practice. Currently users need to wade through 600 pages of the SAN Config guide plus vendor guidelines; VMware would hope to boil this down to a much more manageable 100–150 pages.

If you can't wait that long, Chad Sakac has written the storage chapter in Scott Lowe's new vSphere book, which I believe is available for pre-order on Amazon.

vSphere Storage White Paper

Paul Manning, whom I've mentioned throughout this blog post, has written a great 10-page white paper explaining all of these features in more detail, along with some of the more experimental features I haven't covered.

http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R1.pdf

About the author

Craig Stewart

I'm an IT professional with over 12 years' experience working in both the public and private sector in the UK. I currently work in the UK financial industry, specialising in infrastructure delivery and integration using products from a host of vendors including Microsoft, Citrix, VMware and EMC, to name but a few.

I’ve had an unhealthy interest in all aspects of virtualisation since working on projects deploying VMware Virtual Infrastructure 3 and Citrix XenApp. I have followed this up by achieving certifications in a number of these technologies and trying to find the time to continue learning about and spreading the good word of virtualisation.

Hope you find something interesting here at www.gestalit.com
