
Storage Changes in VMware ESX 3.5 Update 4

Like clockwork, VMware has cranked out another update to their flagship enterprise product, ESX 3.5. The last update came out in early November 2008 and included some major new functionality. What's in store this time to intrigue storage folks? Not much.

Expanded Support for Enhanced vmxnet Adapter

Not specifically a storage change, but the enhanced vmxnet adapter, introduced back in the original release of ESX 3.5, now works with most versions of Windows Server 2003 and XP Pro. Look for improved performance when using guest-side SMB and NFS as well as the guest iSCSI initiator. Note that the VI Client does not offer this driver when configuring non-Enterprise Edition machines; you have to select Windows Server 2003 Enterprise Edition (64-bit) as the guest OS regardless of which version of Server 2003 you are actually running.
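For reference, the adapter type lives in the VM's .vmx file. A sketch of the relevant lines follows; the exact virtualDev string for the enhanced adapter can vary by build, so create a NIC through the VI Client and inspect the resulting .vmx rather than trusting this fragment:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet"       # enhanced vmxnet family; exact string may differ by build
ethernet0.networkName = "VM Network"  # example port group name, not from the release notes
```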

Expanded SAS and SATA Controller Support

If you'd like to install ESX on a server equipped with a PMC 8011, Intel ICH9 or ICH10, CERC 6/I SATA/SAS Integrated RAID Controller, or HP Smart Array P700m Controller, you'll find happiness in Update 4.

The Intel controllers are especially important: we're seeing them used more and more, and this driver is more full-featured than the earlier Broadcom HT 1000 and Intel ICH7 drivers. The Intel ICH9/ICH10 driver is dual-mode (IDE/ATA and AHCI/SATA), supports SATA hard drives, SSDs, and optical drives, and now enables VMFS when the controller is in AHCI/SATA mode. It's not clear whether VMware actually supports VMFS datastores on ICH9/10 SATA, but the documentation says it works. Anyone want to try it out? One thing is certain: you can't use SATA drives in a shared/clustered environment, because the SATA protocol does not include reservations. See VMware's tech note on SATA support, and especially this question:

    Earlier, it was mentioned that we can create VMFS if we use AHCI/SATA mode. If so, why did VMware not claim VMFS support when using SATA controller running in AHCI/SATA mode?

    VMware might decide to add support in the near future. There is no strong need to have VMFS support on a SATA drive, because native SATA protocol does not support reserve/release. Reserve/release is needed if VMFS is used as clustered file system in a shared disk environment.
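For context, SCSI-2 reserve/release is essentially a whole-LUN lock: one initiator reserves the device, and every other initiator gets a reservation conflict until it releases. A toy Python sketch of those semantics (purely illustrative; the host names are invented and this has nothing to do with VMware's actual implementation):

```python
class ReservationConflict(Exception):
    """Raised when another initiator already holds the reservation."""

class LUN:
    """Toy model of SCSI-2 reserve/release on a shared LUN."""
    def __init__(self):
        self.holder = None  # initiator currently holding the reservation

    def reserve(self, initiator):
        if self.holder is not None and self.holder != initiator:
            raise ReservationConflict(f"{self.holder} holds the reservation")
        self.holder = initiator

    def release(self, initiator):
        if self.holder == initiator:
            self.holder = None

lun = LUN()
lun.reserve("esx-host-a")           # host A locks the LUN to update VMFS metadata
try:
    lun.reserve("esx-host-b")       # host B is rejected with a reservation conflict
except ReservationConflict as e:
    print("conflict:", e)
lun.release("esx-host-a")
lun.reserve("esx-host-b")           # now host B can take its turn
```

VMFS leans on exactly this primitive to serialize metadata updates across hosts, which is why a SATA LUN that cannot honor reservations is unsafe to share.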

PXE Boot Support

Rich at VM/ETC points out that Update 4 includes experimental PXE boot support for ESX and ESXi. As he notes, this has major implications for cloud computing platforms, since it means that ESX servers themselves can boot over the network with no local storage at all. Very interesting! Let's bet that Update 5 (expected in June or July) will include this as a supported option.
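Experimental or not, PXE booting a server follows the standard chain: the NIC gets an address from DHCP, is pointed at a TFTP server, and pulls a bootloader and image from there. A generic ISC dhcpd fragment shows the shape of it; the addresses and filename here are placeholders of mine, not anything from the release notes:

```
# dhcpd.conf fragment (generic PXE setup; all values are example placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;   # TFTP server holding the boot files
  filename "pxelinux.0";      # bootloader handed to the PXE client
}
```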

Updated QLogic, Emulex, and LSI Drivers

Like most ESX updates, this one includes updated storage adapter drivers.

• The QLogic Fibre Channel adapter driver and firmware (versions 7.08-vm66 and 4.04.06, respectively) include bug fixes and enhanced NPIV support.
• On the Emulex side, driver version 7.4.0.40 supports the company's HBAnyware 4.0 management software.
• Users of SAS and SCSI LSI MegaRAIDs will find driver versions 3.19vmw (megaraid_sas) and 2.6.48.18vmw (mptscsi), which improve performance and enhance event handling.

Expanded Sun Storage Array Support

All you StorageTek loyalists out there will be happy to see support for Sun's low-end StorageTek 2530 SAS array as well as the modular 6580 and 6780 Fibre Channel arrays. It looks like just about every model in Sun's current storage lineup is now supported in ESX.

Expanded Network Card Support

Support for Gigabit cards is greatly expanded, including HP's quad-port NC375i and dual-port NC362i and NC360m, Intel's Gigabit CT and 82574L, and Broadcom's NetXtreme BCM5722, BCM5755, BCM5755M, and BCM5756. Intel's widely-used 10-gig 82598EB cards are now supported as well.

Tweaks and Fixes

Looking through the release notes, a few storage-related tweaks and fixes stand out:

1. VMware can now optionally throttle back the LUN queue depth automatically when congestion is encountered. See Controlling LUN queue depth throttling in VMware ESX for 3PAR Storage Arrays for more information.
2. The vmklinux module heap size can now be adjusted as LUN queue-depth values are increased. Since tuning LUN queue depths is one common trick of the storage trade to improve performance, especially in queue-stingy systems like ESX, this is welcome news. But call VMware support before you monkey with it!
3. An RDM-related issue where SCSI inquiry data over 36 bytes was truncated or corrupted (for example, when using Microsoft VSS and NetApp SnapDrive) has been resolved.
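If you do go tuning queue depths, the knobs in ESX 3.x live in the driver module options and the VMkernel advanced settings. A sketch for a QLogic HBA follows; the module name and depth value are examples only (module names vary by driver build, so check `vmkload_mod -l` on your host), and, as noted above, talk to VMware support before changing them:

```
# Set the QLogic HBA LUN queue depth (example module name; confirm yours first)
esxcfg-module -s ql2xmaxqdepth=64 qla2300_707_vmw
esxcfg-boot -b      # rebuild the boot configuration, then reboot the host

# Keep the VMkernel's per-VM outstanding-request limit in step with the HBA depth
esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding
```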

Well, that's all, folks. I suggest you all read the release notes for yourself, and please leave a comment if you see an error in what I wrote here or have something to add!

About the author

Stephen Foskett

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage, server virtualization, networking, and cloud computing. He organizes the popular Tech Field Day event series for Gestalt IT and runs Foskett Services. A long-time voice in the storage industry, Stephen has authored numerous articles for industry publications, and is a popular presenter at industry events. He can be found online at TechFieldDay.com, blog.FoskettS.net, and on Twitter at @SFoskett.
