iSCSI Multipathing and RHEV 3.0


I was pounding my head against the wall trying to figure out why my fully redundant iSCSI storage network was not showing multiple paths in the LUN/Volume assignment panel when creating a new storage domain.
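For anyone chasing the same symptom, it can help to look at the host side first, before the LUN/Volume panel. This is just a rough sketch of the checks, assuming the standard iscsi-initiator-utils and device-mapper-multipath tools on the RHEV host; your device names and targets will differ.

    # List the active iSCSI sessions the host has logged into
    iscsiadm -m session

    # Show how device-mapper-multipath has grouped the paths for each LUN
    multipath -ll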

I then had the pleasure of speaking with one of the RHEV experts at Red Hat.  They hit me with the stupid stick on many conceptual fronts, and I've had lots of "AHA, Eureka!" moments that I thought I'd share, in case there are other people having the same challenges I did.

RHEV is, by nature, configured for redundancy on all fronts because the Storage Pool Manager (SPM) role can migrate from one RHEV host to another. This will protect you from a single link failure at any point in the SAN, up to and including full RHEV host failure. In the case of such a failure, the SPM role will be migrated to another host in the cluster, and as soon as the storage connection is verified, the virtual machines running on the failing host will be migrated to a functional host by RHEVM. That's pretty cool in itself, but wait, there's more!

 

If you have the infrastructure for fully cross-connected multipathing, you get even greater redundancy. The RHEV expert helped me implement Mode 1 (active/backup) port bonding on each of my RHEV hosts. This detects link failure on either interface and fails over to the working interface, which protects me from multiple failures, end to end, front to back. My storage array has redundant active/passive controllers, so if one controller fails, the passive one takes over. If one of the iSCSI fabric switches fails, the port bonding detects link failure and fails over to the other half of the bond pair. Combine this with the already robust failover in the RHEV clustered host configuration, and it takes a major disaster to lose connectivity to your iSCSI SAN.
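For reference, here is a minimal sketch of what Mode 1 bonding looks like on a RHEL 6 based RHEV host. The interface names, bond name, and addressing are placeholders for illustration, not the exact configuration Red Hat walked me through.

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical bond device)
    DEVICE=bond0
    BONDING_OPTS="mode=1 miimon=100"   # mode 1 = active/backup, link checked every 100 ms
    IPADDR=192.168.50.10               # placeholder address on the iSCSI storage network
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for the second slave, e.g. eth3)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

With something like this in place, pulling either cable (or losing a whole fabric switch) just shifts traffic to the backup slave, and the iSCSI session stays up.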

I certainly don't want to steal Red Hat's thunder, so if you are interested in the details of how to configure this kind of setup, feel free to post a comment, but I have a feeling there will be some KB articles forthcoming from Red Hat on this that will be far more thorough than what I might write.

 

Best regards,

Kermit Short

 

OK I couldn't figure out how to add an image to a comment, so I'll add it here.  Please refer to my comment below for the explanation of all this spaghetti.

 

[image attachment]
