
P2V strategy for a Physical Server with an iSCSI Partition

Most physical-to-virtual (P2V) server migrations end up as virtual machines with the partitions encapsulated in virtual disk (.vmdk or .vhd) files. But what if the physical server already has a partition configured through an iSCSI connection to the SAN, and what if that’s the same SAN the new VM will run on? Of course, the new VM will have to be on a different LUN (formatted for use by the virtualization host), but should you encapsulate the current NTFS iSCSI partition, or should you maintain the iSCSI initiator within the resulting VM? The former option depends on how much available SAN space you have to work with; the latter requires some extra thinking before you begin.

When you decide to maintain a server’s existing iSCSI partitions as a VM, there are several configuration considerations to plan for.

Multipathing Support for iSCSI is no longer needed in the VM

When you were configuring the iSCSI initiator, chances are you used two physical network interface cards (NICs) for a redundant connection from the server operating system to the storage. You then used the NIC manufacturer’s drivers/management software to create a team and a virtual IP address. Your SAN was configured to allow an iSCSI initiator to connect via that NIC team’s virtual IP address.

As a VM, that same team IP address will probably still be maintained by the initiator, but the need for two NICs and the former manufacturer’s drivers and software goes away. The VM only needs a single vNIC to reach the iSCSI storage. The virtualization host should be configured with a vSwitch mapped to two pNICs; the host therefore provides the redundant connection to the storage.

Be sure to remove the team configuration and the old NIC drivers and software.
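
Once the VM boots, a quick sanity check confirms the initiator still reaches its target over the single vNIC. This is a sketch assuming the server uses the Microsoft iSCSI Software Initiator, which ships with the iscsicli command-line tool:

    :: Inside the VM after P2V: verify an active session to the
    :: target still exists over the single vNIC
    iscsicli SessionList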

Dedicate a vSwitch with its own pNICs for the VM iSCSI traffic

Separate the VM’s iSCSI traffic from the virtualization host’s iSCSI traffic. You could add an extra port group to your iSCSI vSwitch in VMware ESX, for example, but ideally you want two pNICs dedicated to the host and two other pNICs dedicated to the VM(s). This requires separate vSwitches, and it will maximize performance and provide redundancy.
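
On VMware ESX, for example, the dedicated vSwitch can be created from the service console along these lines. This is only a sketch; vSwitch2, vmnic2/vmnic3, and the port group name are placeholders for whatever your environment uses:

    # Create a vSwitch dedicated to VM iSCSI traffic
    esxcfg-vswitch -a vSwitch2
    # Attach two physical uplinks so the host provides the redundancy
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    # Add the port group the VM's single vNIC will connect to
    esxcfg-vswitch -A "VM-iSCSI" vSwitch2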

Consider the cables needed to the SAN switches

Before P2V, each server needed two cables to the storage switches for redundancy. After P2V, each virtualization host will need four. Two of the cables serve the host’s connection to its dedicated LUNs, where the VMs’ operating systems and other partitions are encapsulated. The other two cables are for the VMs’ initiators to access their own iSCSI partitions.

Disconnect the iSCSI initiator before P2V

This is not a must-do, but rather a safety net for the P2V migration process. Disconnecting the server’s iSCSI initiator ensures the LUNs you need to maintain will not be selectable as disks to convert during the migration.
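
With the Microsoft iSCSI Software Initiator, the disconnect can be done from the command line as well as from the control panel applet. A sketch; the <SessionId> placeholder comes from the SessionList output:

    :: List active sessions and note the ID of the session
    :: for the LUN you intend to keep
    iscsicli SessionList
    :: Log out of that session so the converter cannot select the disk
    iscsicli LogoutTarget <SessionId>
    :: Also remove any persistent/bound target so it does not
    :: reconnect at reboot (easiest from the iSCSI Initiator applet)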

Be prepared to recreate any file shares and permissions

If you disconnect the iSCSI initiator as previously mentioned, then be prepared to recreate any file shares and permissions that were configured. To be honest, I am not sure of the best way to prepare for this, or whether it’s even necessary, but in my experience I have had to recreate shares. Thank goodness it was never a complex user or department hierarchy; you can imagine the impact and administrator time that overlooking this would cause.
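
One precaution worth taking, sketched here on the assumption of a Windows server (where share definitions live under the LanmanServer registry key), is to capture the share configuration before the migration so it can be consulted or re-imported afterward:

    :: Record the current shares for reference
    net share > shares-list.txt
    :: Export the share definitions (names, paths, share permissions)
    reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" shares-backup.reg

Note that NTFS permissions are stored on the iSCSI volume itself and should survive the move; it is the share definitions on the old server that need capturing.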

Check out this VMTN Communities thread on this topic too: VMware Communities: P2V when server has a LUN through iSCSI? …
